In a new entry in its Machine Learning Journal, Apple has detailed how it approached the challenge of improving Siri's ability to recognize names of local points of interest, such as small businesses and restaurants.

In short, Apple says it has built customized language models, known as Geo-LMs, that incorporate knowledge of the user's geolocation, improving the accuracy of Siri's automatic speech recognition system. These models help Siri better estimate the sequence of words the user intended.
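
To make the idea concrete, here is a minimal sketch, written in Swift with invented types and scores (Apple has not published its implementation), of how a location-specific language model could rescore a recognizer's candidate transcriptions:

```swift
// Hypothetical sketch, not Apple's actual API: a Geo-LM rescoring
// the n-best transcriptions produced by a speech recognizer.

struct Hypothesis {
    let words: [String]
    let acousticScore: Double  // log-probability from the acoustic model
}

struct GeoLM {
    // Returns a log-probability for a word sequence; a real model would
    // weight place names near the user's location more heavily.
    let logProb: ([String]) -> Double
}

// Combine the acoustic score with the Geo-LM score and keep the best
// hypothesis. The 0.7 weight is an arbitrary placeholder.
func bestTranscription(_ candidates: [Hypothesis],
                       using geoLM: GeoLM,
                       lmWeight: Double = 0.7) -> [String]? {
    candidates.max { a, b in
        let scoreA = a.acousticScore + lmWeight * geoLM.logProb(a.words)
        let scoreB = b.acousticScore + lmWeight * geoLM.logProb(b.words)
        return scoreA < scoreB
    }?.words
}

// Example: a Geo-LM that favors the local "Tom's" lets that hypothesis
// win despite a weaker acoustic score.
let geoLM = GeoLM(logProb: { $0.contains("Tom's") ? -1.0 : -5.0 })
let picked = bestTranscription(
    [Hypothesis(words: ["Tom's", "Restaurant"], acousticScore: -2.0),
     Hypothesis(words: ["Tom", "Restaurant"], acousticScore: -1.5)],
    using: geoLM)
// picked == ["Tom's", "Restaurant"]
```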

Apple says it built one Geo-LM for each of the 169 Combined Statistical Areas in the United States, as defined by the U.S. Census Bureau; these areas encompass 80 percent of the country's population. Apple also built a single global Geo-LM to cover all areas worldwide that fall outside a CSA.

When a user queries Siri, the system is customized with a Geo-LM based on the user's current location. If the user is outside of a CSA, or if Siri doesn't have access to Location Services, the system defaults to the global Geo-LM.
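
The selection step might look something like the sketch below; the type names, the CSA lookup, and the model identifiers are assumptions for illustration, since Apple describes the behavior but not an API:

```swift
import CoreLocation

// Illustrative sketch of picking a Geo-LM by location. All names here
// are placeholders, not Apple's implementation.
struct GeoLMSelector {
    // Placeholder table mapping CSA identifiers to regional model names.
    let regionalModels: [String: String] = [
        "Boston-Worcester-Providence": "geo-lm-boston",
        "San Jose-San Francisco-Oakland": "geo-lm-sf",
    ]
    let globalModel = "geo-lm-global"

    // Stand-in for a geospatial lookup against the Census Bureau's CSA
    // boundaries; a real implementation would test polygon containment.
    func csaID(containing location: CLLocation) -> String? {
        nil
    }

    // Use the regional Geo-LM for the CSA containing the user, falling
    // back to the global model when Location Services is unavailable
    // (location is nil) or the user is outside every CSA.
    func model(for location: CLLocation?) -> String {
        guard let location = location,
              let csa = csaID(containing: location),
              let regional = regionalModels[csa] else {
            return globalModel
        }
        return regional
    }
}
```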

Apple's journal entry is highly technical and quite exhaustive, but the upshot is that Siri should be better at understanding the names of local points of interest, and better at distinguishing between, say, a Tom's Restaurant in Iowa and one in Kansas based on the user's geolocation.

In its testing, Apple found that the customized language models reduced Siri's error rate by between 41.9 and 48.4 percent across eight major U.S. metropolitan regions: Boston, Chicago, Los Angeles, Minneapolis, New York, Philadelphia, Seattle, and San Francisco. The test queries targeted local points of interest and excluded mega-chains like Walmart.

Siri still trails Google Assistant in overall accuracy, according to a recent study by research firm Loup Ventures, but hopefully these improvements eliminate some of the frustration of querying Siri about obscurely named places.