Proprietary Technology

Ignite-TEK utilizes automatic speech recognition (ASR) and machine translation (MT) with deep learning capabilities to ingest customer inquiries across channels (calls, emails, text, chat, and social media) and convert them into meaningful, searchable data.  The artificial intelligence of LucidCX then applies a series of analytical algorithms, pattern recognition, media attribution, and aggregated feedback to produce a 360-degree view that helps companies protect their brand, mitigate legal risk, and improve compliance, among other benefits.

  • Customized Machine Learning/Deep Neural Network Models
  • Cutting-Edge Automatic Speech Recognition
  • SaaS-Based Architecture; Easy to Implement
  • Former Government and Media Monitoring Solution (View Whitepaper)
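
The sketch below is a minimal, hypothetical illustration of the ingest step described above: inquiries from any channel are normalized into a single searchable record, with calls passing through an ASR stage first. The class, function, and field names are our own assumptions for illustration, not Ignite-TEK's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Inquiry:
    """One customer contact from any channel (illustrative only)."""
    channel: str          # "call", "email", "text", "chat", or "social"
    payload: bytes | str  # audio bytes for calls, plain text otherwise
    metadata: dict = field(default_factory=dict)

def transcribe(audio: bytes) -> str:
    """Stand-in for the ASR step (audio -> text); a real system would
    invoke a speech recognition engine here."""
    return "<transcript of %d bytes of audio>" % len(audio)

def ingest(inquiry: Inquiry) -> dict:
    """Normalize any channel into one searchable record."""
    text = (transcribe(inquiry.payload) if inquiry.channel == "call"
            else inquiry.payload)
    # Downstream analytics (pattern recognition, media attribution,
    # aggregated feedback) would enrich this record further.
    return {"channel": inquiry.channel, "text": text, **inquiry.metadata}

print(ingest(Inquiry("chat", "Where is my order?", {"customer_id": 42})))
```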


AppTek’s speech recognition and machine translation engine

The AppTek speech recognition engine is made up of acoustic feature extractors, acoustic models, language models, a lexicon, decoders, and post-processing, which work together to convert sounds into words and attach “confidence scores” to keywords. AppTek’s acoustic models are trained with deep neural networks (simulations of human neurons with many layers of processing), which help identify keywords and learn the probability of which word is being spoken. For example, the sound “weh – thur” could transcribe to either “whether” or “weather”; however, surrounding context such as “rainy” or “sunny” increases the likelihood of the correct choice.  As we work with our clients, our technology learns and improves over time.
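
As a toy illustration of that disambiguation step (this is not AppTek's engine, and the bigram probabilities below are invented for the example), a language model can combine surrounding context with acoustic confidence scores to choose between acoustically identical candidates:

```python
# Hypothetical bigram probabilities a language model might learn from text.
BIGRAM_P = {
    ("rainy", "weather"): 0.12,
    ("rainy", "whether"): 0.0001,
    ("asked", "whether"): 0.09,
    ("asked", "weather"): 0.0002,
}

def pick_word(prev_word: str, candidates: list[str],
              acoustic_scores: dict) -> str:
    """Combine acoustic confidence with language-model context."""
    def score(word: str) -> float:
        return acoustic_scores[word] * BIGRAM_P.get((prev_word, word), 1e-6)
    return max(candidates, key=score)

# The sound "weh - thur" is acoustically ambiguous: both candidates
# receive the same acoustic score, so context decides.
acoustic = {"weather": 0.5, "whether": 0.5}
print(pick_word("rainy", ["weather", "whether"], acoustic))  # -> "weather"
print(pick_word("asked", ["weather", "whether"], acoustic))  # -> "whether"
```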

Take a deeper look at AppTek technology.


The Ignite-TEK Difference

Most speech recognition engines come in a one-size-fits-all model, where multiple clients across multiple disciplines share the same machine learning algorithms to generate output.  At Ignite-TEK, we build customized speech models tailored to individual clients.  The process begins with running speech records through the base lexicon and analyzing the output for accuracy.  By feeding in both client audio files and client call scripts for contextual feedback, the model is refined into a customized solution that returns superior accuracy versus out-of-the-box solutions.
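
To make the accuracy analysis in this process concrete, here is a small, self-contained sketch that scores a model's transcript against a reference call script using word error rate (WER), a standard ASR accuracy measure. The sample strings are hypothetical, and this illustrates the general technique rather than Ignite-TEK's actual tooling.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical comparison: base lexicon output vs. client-tuned output.
script = "thank you for calling please hold"
base_output = "thank you four calling please old"
tuned_output = "thank you for calling please hold"
print(f"base model WER:  {wer(script, base_output):.2f}")   # 0.33
print(f"tuned model WER: {wer(script, tuned_output):.2f}")  # 0.00
```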