In data mining, clustering is a useful tool for discovering hidden patterns in a given dataset. Many clustering algorithms have been proposed for numeric data, but only a few target categorical data, even though many relational databases contain categorical datasets. In this paper, we propose a new dissimilarity metric for categorical datasets. Previous binary-style dissimilarity metrics often fail to measure the similarity between data objects correctly when synonymous choices or unimportant attributes exist. Our metric overcomes this shortcoming by computing how far apart two different choices are, based on information from the whole dataset. To show the effectiveness of our dissimilarity metric, we conducted experiments on both synthetic and real categorical datasets. When our metric was applied to traditional clustering algorithms, their clustering results improved greatly over those obtained with previous metrics. Moreover, a hierarchical algorithm using our metric even outperformed well-known clustering algorithms dedicated to categorical datasets.
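The idea of measuring how far apart two categorical choices are from whole-dataset information can be illustrated with a context-based distance: two values of an attribute are considered close when the rest of the dataset looks similar wherever each value occurs, so synonymous choices (which co-occur with the same contexts) get a small distance even though a binary match/mismatch metric would treat them as completely different. The sketch below is a minimal illustration of this general idea, not the paper's actual formula; the function name and the use of total variation distance between co-occurrence profiles are assumptions for the example.

```python
from collections import Counter

def value_dissimilarity(rows, attr, v1, v2):
    """Illustrative context-based dissimilarity between two values of one
    categorical attribute (hypothetical sketch, not the paper's metric).
    Two values are close when the other attributes are distributed
    similarly in the rows where each value appears."""
    n_attrs = len(rows[0])
    others = [j for j in range(n_attrs) if j != attr]

    def profile(v):
        # Distribution of co-occurring values over the other attributes,
        # restricted to rows where attribute `attr` takes the value `v`.
        selected = [r for r in rows if r[attr] == v]
        counts = Counter((j, r[j]) for r in selected for j in others)
        total = sum(counts.values())
        return {key: c / total for key, c in counts.items()}

    p1, p2 = profile(v1), profile(v2)
    keys = set(p1) | set(p2)
    # Total variation distance between the two co-occurrence profiles:
    # 0.0 for identical contexts, 1.0 for disjoint contexts.
    return 0.5 * sum(abs(p1.get(k, 0.0) - p2.get(k, 0.0)) for k in keys)

rows = [
    ("red",     "small", "A"),
    ("crimson", "small", "A"),
    ("red",     "small", "A"),
    ("blue",    "large", "B"),
]
# "red" and "crimson" occur in identical contexts, so their distance is 0,
# even though a binary mismatch metric would score them as fully different.
print(value_dissimilarity(rows, 0, "red", "crimson"))  # → 0.0
print(value_dissimilarity(rows, 0, "red", "blue"))     # → 1.0
```

Under a binary metric both pairs above would receive the same mismatch score of 1; the context-based view separates the synonymous pair from the genuinely different one, which is the shortcoming the abstract describes.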