The Evolution of Sound

Intelligence Powered Signal Processing

About Yobe

Conventional signal processors are not designed around the needs of the human end user, so they are inefficient at deciding which parts of a signal are important (and should be enhanced) and which are not (and should be ignored or removed). Yobe takes a fundamentally different approach by joining three separate disciplines into one highly effective solution:

  • Advanced Artificial Intelligence

  • Unconventional Signal Processing

  • Broadcast Studio Sound Enhancement

  • Intelligent Intra-Human Signal Processing

    IISP melds the mathematics of “Intelligently Reactive Signal Processing” with black-box behavioral models of human-to-human interaction. In other words, we use Artificial Intelligence to focus on the “human” (wet) interactive portion of the circuit when addressing signal processing. Using IISP, Yobe has designed a portfolio of Intelligent Audio Technologies for both speech and multimedia content: a real-time automated platform that can intelligently identify, categorize, and enhance audio content in dynamically changing environments (such as music or heavy background noise) across an unlimited number of input and output scenarios.

  • A unique by-product

    A unique byproduct of Yobe’s frequency manipulation process is the ability to DECREASE an audio file or stream to 30%–50% of its original size while simultaneously INCREASING its audio quality. Our solution allows for higher compression levels while maintaining improved sound fidelity.
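
To make the claimed ratio concrete, here is a trivial arithmetic sketch (illustrative only; the helper name and example file size are assumptions, not part of Yobe’s product):

```python
def compressed_size_range(original_bytes, low=0.30, high=0.50):
    """Projected size band for a file reduced to 30%-50% of its original size."""
    return original_bytes * low, original_bytes * high

# A hypothetical 40 MB audio file would land between roughly 12 MB and 20 MB.
lo_bytes, hi_bytes = compressed_size_range(40 * 1024 * 1024)
```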

Introducing Vox.ē
Voice technology for the future

Current R&D Projects

Voice Identification System for Profile Retrieval
Far Field Voice Recognition w/ Near Field Noise
“Hands Free” Voice Authentication & Device Control
Hearing Aids
Location Enabled Gaming

VISPR (pronounced like Whisper with a “V”)

Yobe has taken the ability to track voice DNA, specifically in noisy environments, into the low-power wake-word arena. With a simple utterance (e.g., “Alexa,” “Siri,” “OK Google”), Yobe can validate an authorized user and retrieve that user’s profiles to pre-load their device/application settings. Our solution enables your device to identify you by recognizing your unique voice. Learn more at: http://vispr.io


Voice recognition systems have been optimized for near-field interaction (scenarios where the microphone is close to the user’s mouth), leveraging a clear voice signal with little ambient noise for analysis. These systems fail when the speaker of interest is farther away (far field), and fare even worse when there is noise (including other voices) in the near field. Yobe has developed a suite of solutions that can track far-field voices in high-noise environments.

As the IoT market continues to grow, voice command will become the dominant interface for many of the smart devices of the future. Yobe’s AI-powered voice recognition technology is, practically speaking, immune to the presence of background noise and brings the power and advantages of existing voice recognition technologies into the noisy real world we live in. Yobe has partnered with an industry-leading team of cognitive computing and speech & language technology specialists to create a seamless voice authentication and speech recognition platform that operates effectively in high-noise environments. Yobe is taking a page out of science fiction to bring voice control into the mobile 21st century.

Yobe’s frequency enhancement process converts sound (specifically in the high and high-mid frequency bands) into electrical inputs that the brain can easily process, making sounds in otherwise inaccessible frequency ranges available to those with hearing loss. By merging our noise cancellation and biometrics solutions, Yobe has developed an answer to the dreaded “cocktail party” problem. Our voice separation process challenges industry standards. We are currently exploring solutions that would fit into wireless digital products.

Yobe’s Location Enabled Audio (LEA) solutions enable users to experience full spatial audio awareness in gaming environments. Adding Yobe’s enhancement creates a 3D audio event for a heightened “in-action” experience. In other words, the user will be able to hear and perceive the location of voices and sound effects relative to where the player is positioned in the virtual game environment.
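
Spatial audio of this kind builds on standard binaural cues. As an illustration (this is the classic Woodworth approximation, not Yobe’s proprietary method, and the head-radius constant is an assumed average), one such cue is the interaural time difference a listener uses to localize a source:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
HEAD_RADIUS = 0.0875     # m, assumed average human head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation of the extra travel time (seconds) to the far
    ear for a source at the given azimuth (0 = straight ahead, 90 = full side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
```

A game engine can delay one ear’s channel by this amount to make a sound appear to come from the corresponding direction.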


Team Leadership

S.Hamid Nawab PhD
Chief Scientist/ Technology Advisor (Co-Founder)

An internationally renowned researcher, engineer, and educator, Nawab (PhD, MIT ’82) conducts research at the intersection of signal processing and artificial intelligence for applications in speech, audition, and neuroscience. A Professor of Electrical & Computer Engineering and Biomedical Engineering at Boston University, he is an elected fellow of AIMBE (Class of 2006), an honor representing the top 2% of medical and biological engineering researchers. He has published over 100 research articles and co-authored the seminal book Symbolic and Knowledge-Based Signal Processing (Prentice Hall, 1992).

Dr. Nawab is supported by a team of three commercial algorithm engineers with experience in the conceptualization, implementation, and evaluation of signal computing, with an emphasis on applied artificial intelligence.

Ken Sutton
President and CEO (Co-Founder)

Ken Sutton (Co-Founder) serves as the company’s President and CEO. Ken is a serial entrepreneur with nearly 20 years of strategy and corporate business management experience in the technology, marketing, and finance industries. Before his tenure in technology, Ken was a financial services professional who worked on critical projects for hedge funds, venture capital, and private equity firms focused on pre-IPO investments in the technology and real estate arenas. Ken started his first venture, the Tampa Marketing Group (TMG), directly out of university: a multi-state B2B marketing firm focused on product development, market research, and brand strategy. TMG managed regional product promotions for companies including Vivitar, Universal Studios, Disney, MCI, AT&T, and several Major League sports teams.

Ken’s other start-up experience includes being a founding member of Co-Mune Inc (a community-focused connectivity and mobile commerce platform) and Managing Partner of Sutton Willis and More (a boutique strategy and capital advisory firm). Ken attended the University of Connecticut and served as a proud member of the US Armed Forces (Army Ranger).

Shey-Sheen Chang PhD
External Technology Advisor

Dr. Chang heads External Technology. He is a commercial algorithm engineer with experience in both signal processing and applied artificial intelligence, and has 10 years of experience designing, prototyping, developing, and supporting commercially successful software products for North American, European, and Asian markets. He has participated in several research and development projects that resulted in fully commercialized products currently on the market, serving over 40 laboratories in more than 6 countries.

Shibani Abhyankar MS
Sr. Software Engineer

A commercial DSP software engineer with experience in electronics, image processing, machine learning, and automation.

James Fairey
Senior Advisor /Audio Innovation (Co-Founder)

James Fairey (Co-Founder) serves as Yobe’s Head of Audio Innovation. James is the director of production for one of the world’s largest media companies and has over 25 years of broadcast and digital creation experience. For decades James has overseen and designed audio production teams with annual sales in the tens of millions of dollars. He administers the company’s Internet audio and podcasting, and programs the non-linear audio processing for its terrestrial audio and online streaming platforms. James consults on audio design and training programs for Berklee College of Music, the Art Institutes of America, Georgia State, and the University of Georgia. In the 1990s James began designing professional build-outs for large media companies; most notably, in 1999 he designed and built all the studios (audio and film) for Susquehanna Media (later purchased by Cumulus Media).

Zenon Olbrys
Financial Advisor

Zenon is an experienced CFO with a proven track record of turning concepts into valuable contributions, with a focus on start-ups and transformation ventures. He is a financial and operational business expert with an accomplished career leading technology companies to achieve unprecedented success while increasing ROI for shareholders. Zenon is an alumnus of both Oxford University and The Ohio State University.

Live.
Noiscan
(Noise and Echo Cancellation Solutions)

A feed-forward (single-mic) noise and echo cancellation technology delivering superior (HD) call quality with bandwidth reduction, packaged as a downloadable software application.
More information

Work.
VoiceBright
(VoIP Communications Solutions)

Yobe’s software solutions for incoming and outgoing VoIP communications optimize conference calls, enhance call clarity, and allow for lower bandwidth requirements, all while increasing audio quality.
More information

Play.
.y3d Audio (Media Enhancement Solutions)

Many argue that compressed files are not a true representation of the originally produced recording; hence, “HD Audio” platforms have been created to address the challenge of delivering high-quality content in this new digital age. Yobe’s ability to increase sound quality while addressing file storage and download constraints is a clear answer to this challenge.
More information

Secure.
1Factor
(Voice Biometric Security Solutions)

The hurdles for traditional Voice Biometric solutions are system faults caused by noise and poor communications linkages. Yobe’s proprietary process eliminates unwanted artifacts and leaves a clean, enhanced voice for feature extraction and analysis.
More information


News

08.05.18

There are times when voice-driven systems don’t work all that well because of background noise or other voices. That’s because it’s hard for machines (and humans) to pull out a particular voice when many others are speaking. This is sometimes called “the cocktail party problem.”

Yobe Inc, an industry pioneer in artificial intelligence-powered signal processing solutions, has announced that it has secured $1.8M in seed funding from Clique Capital Partners, a $100M fund for investing in transformative voice technologies. The capital will be used to accelerate the commercialization of Yobe’s intelligent voice biometrics technology as they prepare for product launch this summer.

Full article here: TechCrunch


07.05.18

Yobe, a Boston-based developer of AI-powered speech recognition and voice authentication software, announced on Tuesday that it has secured $1.8 million in seed funding from Clique Capital Partners, a $100 million fund for IoT and voice technologies headquartered in Reston, Virginia.

The new investment follows a $990,000 round of angel funding and the receipt of an undisclosed National Science Foundation grant in 2016, which was awarded to further develop a software capable of separating distinct voices from background noise.

The capital from the seed round will be used to accelerate the commercialization of Yobe’s intelligent voice biometrics technology as the company prepares for product launch this summer.

Started in 2014, Yobe was co-founded by serial entrepreneur Ken Sutton, the company’s president and CEO, and S. Hamid Nawab, a professor of electrical and computer engineering at Boston University.

Read the full article here: Bostinno


04.02.17

On December 1, 2016, Yobe Inc was awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant to conduct research and development (R&D) work on a cocktail-party technology: “Real-Time Conversation Separation from Background Voices and Sounds.”

The broader impact/commercial potential of this SBIR Phase I project is that it will, for the first time, make it possible to create voice technologies whose speech and speaker recognition performance does not significantly degrade in the presence of interfering voices or environmental sounds. That limitation has kept many voice technologies out of both the mobile and IoT markets; solving it makes the noisy world of smartphones realistic for voice technologies (like voice authentication) that have avoided the space to date.

“The National Science Foundation supports small businesses with the most innovative, cutting-edge ideas that have the potential to become great commercial successes and make huge societal impacts,” said Barry Johnson, Director of the NSF’s Division of Industrial Innovation and Partnerships. “We hope that this seed funding will spark solutions to some of the most important challenges of our time across all areas of science and technology.”

NSF accepts Phase I proposals from small businesses twice annually, in June and December. Small businesses with innovative science and technology solutions and commercial potential are encouraged to apply. All proposals submitted to the NSF SBIR/STTR program undergo a rigorous merit-based review process.

To learn more about the NSF SBIR/STTR program, visit: www.nsf.gov/SBIR

04.02.17

As voice recognition finally hits parity with human performance, vendors are using vocal computing in more sophisticated ways. By Stephanie Condon for Between the Lines

Amazon’s voice-activated assistant Alexa made a splash at CES in 2016, and at this year’s show, Alexa is just about everywhere you look.

While Amazon has its own motivations for distributing its platform as broadly as possible, its momentum also represents a larger trend, according to Shawn DuBravac, chief economist for the Consumer Technology Association (CTA).

Speech recognition and vocal computing have reached an inflection point, he said at the Las Vegas conference, now that the word error rate (WER) has reached about 5 percent, effectively achieving human parity. In the mid ’90s, the WER was effectively 100 percent. By 2013, it was around 23 percent.
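
The word error rate cited above is conventionally computed as the word-level edit distance between a reference transcript and the recognizer’s hypothesis, divided by the number of reference words. A minimal sketch (the function name and example phrases are mine, for illustration):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two substituted words out of four reference words gives a WER of 0.5.
score = wer("turn on the lights", "turn off the light")
```

A 5 percent WER means roughly one word in twenty is inserted, deleted, or substituted relative to what was actually said.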


“We’ve seen more progress in this technology in the last 30 months than we saw in the last 30 years,” DuBravac said. “Ultimately vocal computing is replacing the traditional graphical user interface.”

The CTA estimates about 5 million voice-activated digital assistants have been sold to date, and that this figure will double in 2017.

There are other factors, along with better speech recognition, that are “ushering in a new era of faceless computing,” DuBravac said. GUIs started to disappear with wearables and other non-traditional computing applications around 2010, he noted. That trend is expected to continue with “robots” like Mayfield’s Kuri or Samsung’s POWERbot VR7000 vacuum cleaner — two devices officially unveiled this week at CES.

Kuri may be a bit pricey now — it will cost $700 once it launches in the US — but ultimately, removing GUIs should help lower the price and battery requirements of such devices. DuBravac noted that GUIs themselves were initially far too expensive, with the Xerox Star hitting the market in 1981 at $75,000. It took just a few years for GUIs to become commercially viable. Thanks to voice recognition, the CTA expects home robots to grow from around 2.9 million in 2016 to 5 million by 2020.

And as the technology advances, voice recognition features will become more nuanced and useful. For instance, financial services companies are already adopting voice-activated functions. Voice recognition will also drive a more sophisticated smart home market. The company Somfy, which has for decades made automatically retractable awnings, unveiled at CES its new voice controlled, all-in-one home security system, the Somfy One. “It’s clear [voice] is the new interface of the home,” Somfy’s Jean-Marc Prunet told ZDNet.