I've been working on a single-page web application using React. One feature uses a JSON
list of Colombian cities:
"colombiaCities": [
{
"city": "Bogotá",
"lat": 4.6126,
"lng": -74.0705,
"country": "Colombia",
"iso2": "CO",
"adminName": "Distrito Capital",
"capital": "primary",
"population": 9464000,
"populationProper": 7963000
},
// ...515 other cities
A user can search for a city to get some details. Their input will be in English, with no accent marks. They could type the accents if they wanted to, but they choose not to. That's not very cosmopolitan of them.
Without accent marks, this user will (sometimes) fail to get their information. When the accentless string is compared to the accented string in the object above (with case equalized using toLowerCase()), it is compared character by character. All is well until the a in Bogota is put up against the á in Bogotá. They need a way to compare these words as equals.
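To make the failure concrete, here is roughly what that naive comparison does (the variable names are just for illustration):

const userInput = 'Bogota'; // what the user typed
const cityName = 'Bogotá';  // what the data contains

// Lowercasing alone doesn't help: 'a' and 'á' are different characters.
console.log(userInput.toLowerCase() === cityName.toLowerCase()); // false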
Thus began my search. I first came across the normalize() method. It looks like this:
const str = 'ÁÉÍÓÚáéíóúâêîôûàèìòùÇç';
// NFD splits each accented letter into a base letter plus a combining mark,
// then the regex strips the combining marks (U+0300–U+036F).
const parsed = str.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
console.log(parsed); // AEIOUaeiouaeiouaeiouCc
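If I went this route, the comparison could look something like this. It's just a sketch; stripAccents is a helper name I made up, not anything built in:

// Hypothetical helper: decompose, then drop the combining marks.
const stripAccents = (s) => s.normalize('NFD').replace(/[\u0300-\u036f]/g, '');

const matches = (input, cityName) =>
  stripAccents(input).toLowerCase() === stripAccents(cityName).toLowerCase();

console.log(matches('Bogota', 'Bogotá')); // true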
Regular expressions still scare me off, though, so I moved on to the next search result: localeCompare(). This looked more my speed, plus it already does the comparing I need. It looks like this:
const enStr = 'Bogota';
const esStr = 'Bogotá';
// Returns 0, meaning the strings count as equal at 'base' sensitivity
// (accents and case are ignored).
enStr.localeCompare(esStr, 'es-CO', { sensitivity: 'base' });
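Wired into the city search, it could look roughly like this. The array and function names are assumptions for the sketch, not my actual code:

// Assuming colombiaCities is the array shown earlier, e.g.:
const colombiaCities = [{ city: 'Bogotá', adminName: 'Distrito Capital' /* ... */ }];

// Accent- and case-insensitive lookup of a city by name.
const findCity = (input) =>
  colombiaCities.find(
    (c) => c.city.localeCompare(input, 'es-CO', { sensitivity: 'base' }) === 0
  );

console.log(findCity('bogota')); // { city: 'Bogotá', ... }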
I had actually looked into one other term before localeCompare(): Full Text Search. It's pretty heavy duty. In JavaScript, it can come in the form of a library dependency like FlexSearch. Far too bulky for the humble sorting task I have at hand.
I will choose localeCompare() for my word comparison and city sorting features.
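For the sorting side, the same method slots straight into Array.prototype.sort(). A rough sketch, again with a made-up slice of the city list:

// A few cities from the list above, just for illustration.
const cities = [{ city: 'Cúcuta' }, { city: 'Bogotá' }, { city: 'Cali' }];

// Sort alphabetically with Spanish collation so accented names land
// where a Spanish speaker would expect them.
const sorted = [...cities].sort((a, b) => a.city.localeCompare(b.city, 'es-CO'));

console.log(sorted.map((c) => c.city)); // ['Bogotá', 'Cali', 'Cúcuta']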