Ultimate Horse Whisperer

I propose an extension of the “Speech-to-Text” AI services typically used for (e.g.) human language transcription and translation, applied to horses.
Horse owners often have a sense of what their animals are telling them, but there is no accepted “universal language” of horse vocalizations that humans can reliably interpret.

The project could run in several phases, described briefly below as an example:

  1. Data Harvesting: a User connects to the Market via their portable device and completes a profile of secure data that identifies them and their animals – age, breed, condition, temperament etc. (see the data-record sketch after this list). Recordings could be past audio files or real-time input captured “on demand” (similar to Smart Speaker tech). The service could be free to the User while the AI Corpus is being created.
  2. Data Analysis: the characteristics, causes and responses for “generic sounds” are detailed in the attached pic (“004_Default Horse Sounds.jpg”). More data enables the differentiation of breeds and individual horses in different contexts.
  3. Feedback: as the Corpus and the quality of analysis improve, Users are encouraged to rate the accuracy of the AI service’s interpretations – this provides a virtuous circle of improvement.
  4. Maturity: once the quality reaches a certain standard, an app that is initially entertaining becomes beneficial in the longer term, contributing to the health and welfare of the animals.
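As a very rough illustration of what a harvested data record might look like, here is a minimal Python sketch; the field names and types are my own assumptions (not a designed schema), and they just tie together the profile data, the recordings, and the accuracy ratings from the feedback phase:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class HorseProfile:
    """Secure profile data identifying an owner's animal (hypothetical fields)."""
    owner_id: str
    horse_id: str
    age_years: float
    breed: str
    condition: str        # e.g. "healthy", "recovering from injury"
    temperament: str      # e.g. "calm", "nervous"

@dataclass
class RecordingSubmission:
    """One vocalization recording, either a past file or captured on demand."""
    horse_id: str
    captured_at: datetime
    audio_path: str                            # path or URL of the uploaded audio
    context_note: str = ""                     # owner's description of the situation
    ai_interpretation: Optional[str] = None    # filled in by the analysis phase
    user_accuracy_rating: Optional[int] = None # 1-5 rating, feeding the "virtuous circle"
```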

3 AI Innovations In Equine Health Monitoring — EQuine AMerica Magazine (eq-am.com)

Evidence that such project proposals should be taken seriously and could have immense Return on Investment value…
“…Although the horse industry has been hard to disrupt, many equestrians don’t need convincing that objective data about their horses can be hugely beneficial”

This week I found Sonic Visualiser: free and open-source, designed for the analysis of music recordings, and therefore (perhaps) a better technical starting point than extending AI “Speech-to-Text” services, which depend on pre-training on the grammar and context of a particular language.

I’m able to generate convincing-looking spectral analysis output from horse vocalizations (an example below for a “neigh”), but I have not yet found a good AI post-processing step for it.
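For anyone who wants to reproduce that spectral output outside Sonic Visualiser, a minimal Python sketch along these lines produces a comparable spectrogram; the filename neigh.wav and the STFT parameters are placeholders, not values from my actual analysis:

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load the recording (neigh.wav is a placeholder filename).
y, sr = librosa.load("neigh.wav", sr=None)

# Short-time Fourier transform -> magnitude spectrogram in dB,
# roughly what Sonic Visualiser shows in its spectrogram layer.
S = librosa.stft(y, n_fft=2048, hop_length=512)
S_db = librosa.amplitude_to_db(np.abs(S), ref=np.max)

plt.figure(figsize=(10, 4))
librosa.display.specshow(S_db, sr=sr, hop_length=512, x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Spectrogram of a horse neigh")
plt.tight_layout()
plt.show()
```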

If “Speech-to-Text” can be generalised to “Sound-to-Pattern” analysis, maybe there is no need for language pre-training at all, and the range of Use Cases could extend to Digital Twin models for animals, humans, machines etc.?
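As a sketch of what “Sound-to-Pattern” might mean in practice (all file names, the cluster count and the feature choice here are hypothetical), simple unsupervised clustering of spectral features needs no language pre-training at all:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def sound_to_features(path: str) -> np.ndarray:
    """Summarise a recording as a fixed-length feature vector (mean MFCCs)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical collection of vocalization recordings from the Corpus.
recordings = ["neigh_01.wav", "nicker_01.wav", "snort_01.wav", "neigh_02.wav"]
X = np.stack([sound_to_features(p) for p in recordings])

# Group recordings into unlabelled "patterns"; meanings would come later
# from owner feedback rather than from any language pre-training.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for path, cluster in zip(recordings, kmeans.labels_):
    print(path, "-> pattern", cluster)
```

Owner ratings from the Feedback phase could then attach meanings to the discovered patterns, rather than relying on a pre-trained grammar.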

A solution is not too far away, now that there is serious financial support:


…And some more context for potentially very interesting Use Cases…