The Evolution of Sound

Intelligence Powered Signal Processing

About Yobe

Conventional signal processors are not designed around the needs of the human end-user. As a result, they are inefficient at deciding which parts of a signal are important (and should be enhanced) and which are not (and should be ignored or removed). Yobe takes a fundamentally different approach by joining three separate disciplines into one highly effective solution:

  • Advanced Artificial Intelligence

  • Unconventional Signal Processing

  • Broadcast Studio Sound Enhancement

  • Intelligent Intra-Human Signal Processing

    IISP melds the mathematics of “Intelligently Reactive Signal Processing” with black-box behavioral models of human-to-human interaction. In other words, we use Artificial Intelligence to focus on the “human” (wet) interactive portion of the circuit when addressing signal processing. Using IISP, Yobe has designed a portfolio of Intelligent Audio Technologies for both speech and multimedia content that stands alone as a real-time automated platform, one that can intelligently identify, categorize, and enhance audio content in dynamically changing environments (such as music or heavy background noise) for an unlimited number of input and output scenarios.

  • A unique by-product

    A unique by-product of the Yobe frequency manipulation process is the ability to DECREASE an audio file or stream to between 30% and 50% of its original size while simultaneously INCREASING the audio quality. Our solution allows for higher compression levels while maintaining improved sound fidelity.
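Yobe has not published the details of its frequency manipulation process. Purely as an illustration of the general idea behind transform-domain compression (keep only the significant frequency content; discard the rest), a minimal sketch might look like the following. Every name and parameter here is hypothetical, not Yobe's method.

```python
import numpy as np

def subband_compress(signal, keep_fraction=0.4):
    """Keep only the strongest frequency coefficients and discard the
    rest. Hypothetical illustration, not Yobe's actual process."""
    spectrum = np.fft.rfft(signal)
    magnitudes = np.abs(spectrum)
    n_keep = max(1, int(len(spectrum) * keep_fraction))
    threshold = np.sort(magnitudes)[-n_keep]  # n_keep-th largest magnitude
    mask = magnitudes >= threshold
    # Zeroed coefficients compress very well when stored sparsely.
    reconstructed = np.fft.irfft(spectrum * mask, n=len(signal))
    return reconstructed, mask.sum() / len(spectrum)

# A tone in weak wideband noise: most of the energy sits in a few bins,
# so 40% of the coefficients reconstruct the signal almost perfectly.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
x = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(8000)
y, kept = subband_compress(x, keep_fraction=0.4)
```

The sketch shows why discarding frequency content can shrink a file dramatically while remaining nearly transparent when the retained bands carry most of the perceptually relevant energy.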

Introducing Vox.ē
Voice technology for the future

Current R&D Projects

“Hands Free” Voice Authentication & Device Control
2 Speaker Surround Sound
Hearing Aids
Location Enabled Gaming

As the IoT market continues to grow, voice command will become the dominant ‘interface’ for many of the smart devices of the future. Yobe’s AI-powered voice recognition technology is, practically speaking, immune to the presence of background noise, bringing all the power and advantages of existing voice recognition technologies into the noisy real world we live in. Yobe has partnered with an industry-leading team of cognitive computing, speech, and language technology specialists to create a seamless voice authentication and speech recognition platform that operates effectively in high-noise environments. Yobe is taking a page out of science fiction to bring voice control into the mobile 21st century.

As available real estate in today’s televisions gets smaller and smaller, there is much less room for speakers; Yobe understands and embraces this fact. Through better audio processing (even with smaller, condensed speakers), Yobe can deliver a fuller and richer sound experience. Yobe’s open-air technology is rooted in delivering 5.1 sound through 2 speakers, allowing a one-box audio solution. Yobe’s 5-band spatialization process manipulates the audio’s frequencies to deliver clear speech and the spatialization of non-speech elements (like sound effects).
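Yobe's actual 5-band process is proprietary, but the basic mechanic of splitting audio into five frequency bands so each can be processed independently can be sketched generically (band edges below are chosen arbitrarily for illustration):

```python
import numpy as np

def five_band_split(signal, sample_rate, edges=(300, 1200, 3000, 6000)):
    """Partition the spectrum at the given band edges (Hz) and return
    one time-domain signal per band. The edges here are hypothetical
    examples, not Yobe's actual values."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    bounds = (0.0, *edges, float(sample_rate))  # final bound catches Nyquist
    bands = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(spectrum * mask, n=len(signal)))
    return bands

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)  # 440 Hz tone lands in the 300-1200 Hz band
bands = five_band_split(x, sr)
# Per-band processing (e.g. spatializing non-speech bands) would happen
# here; summing the untouched bands reconstructs the original exactly.
y = sum(bands)
```

Because every frequency bin falls in exactly one band, the split is lossless until per-band gains or spatialization effects are applied.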

Yobe’s frequency enhancement process converts sound (specifically in the high and high-mid frequency bandwidths) into electrical inputs that the brain can easily process, making sounds in otherwise inaccessible frequency ranges available to those with hearing loss. By merging our noise cancellation and biometrics solutions, Yobe has developed an answer to the dreaded “Cocktail Party” problem. Our voice separation process challenges industry standards. We are currently exploring solutions that would fit nicely into wireless digital products.

Yobe’s Location Enabled Audio (LEA) solution enables users to experience full spatial audio awareness in gaming environments. Adding Yobe’s enhancement creates a 3D audio event for a heightened “in-action” experience: the user can hear and perceive the location of voices and sound effects relative to where the player is positioned in the virtual game environment.
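As an illustration of the underlying idea (not Yobe's LEA implementation), here is a constant-power pan that places a mono source in the stereo field according to its position relative to the listener; a full 3D solution would also model distance attenuation and HRTF cues:

```python
import numpy as np

def pan_by_position(mono, source_xy, listener_xy):
    """Constant-power stereo panning driven by source position.
    Textbook sketch only; covers left/right azimuth, not full 3D."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = np.arctan2(dx, dy)  # 0 = straight ahead of the listener
    # Map azimuth [-pi/2, pi/2] to a pan angle [0, pi/2].
    pan = (np.clip(azimuth, -np.pi / 2, np.pi / 2) + np.pi / 2) / 2
    left = mono * np.cos(pan)
    right = mono * np.sin(pan)
    return np.stack([left, right])

tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
# Source directly to the listener's right: energy favors the right channel.
stereo = pan_by_position(tone, source_xy=(1.0, 0.0), listener_xy=(0.0, 0.0))
```

The cos/sin gain pair keeps total power constant at every pan position, which is why the source seems to move without getting louder or quieter.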


Team Leadership

S.Hamid Nawab PhD
Chief Scientist/ Technology Advisor (Co-Founder)

An internationally renowned researcher, engineer, and educator, Nawab (PhD, MIT ’82) conducts research at the intersection of signal processing and artificial intelligence for applications in speech, audition, and neuroscience. A Professor of Electrical & Computer Engineering and Biomedical Engineering at Boston University, he is an elected fellow of AIMBE (Class of 2006), an honor that represents the top 2% of medical and biological engineering researchers. He has published over 100 research articles and co-authored the seminal book “Symbolic and Knowledge-Based Signal Processing” (Prentice Hall, 1992).

Dr. Nawab is supported by a team of three commercial algorithm engineers with experience in the conceptualization, implementation, and evaluation of signal computing, with an emphasis on applied artificial intelligence.

Ken Sutton
President and CEO (Co-Founder)

Ken Sutton (Co-Founder) serves as the company’s President and CEO. Ken is a serial entrepreneur with nearly 20 years of strategy and corporate business management experience in the technology, marketing, and finance industries. Before his tenure in technology, Ken was a financial services professional who worked on critical projects for hedge funds, venture capital, and private equity firms focused on pre-IPO investments in the technology and real-estate arenas. Ken started his first venture, the Tampa Marketing Group (TMG), directly out of university. TMG was a multi-state B2B marketing firm focused on product development, market research, and brand strategy, and managed regional product promotions for companies such as Vivitar, Universal Studios, Disney, MCI, AT&T, and several Major League sports teams.

Ken’s other start-up experience includes being a founding member of Co-Mune Inc. (a community-focused connectivity and mobile commerce platform) and Managing Partner of Sutton Willis and More (a boutique strategy and capital advisory firm). Ken attended the University of Connecticut and served as a proud member of the US Armed Forces (Army Ranger).

Shey-Sheen Chang PhD
Head of External Technology (Senior Advisor)

Dr. Chang is a commercial algorithm engineer with experience in both signal processing and applied artificial intelligence. He has 10 years of experience designing, prototyping, developing, and supporting commercially successful software products for the North American, European, and Asian markets. He is experienced in signal processing algorithm development and has participated in several research and development projects that have resulted in fully commercialized products currently on the market, servicing over 40 laboratories in over 6 countries.

James Fairey
Senior Advisor /Audio Innovation (Co-Founder)

James Fairey (Co-Founder) serves as Yobe’s Head of Audio Innovation. James is the Director of Production for one of the world’s largest media companies and has over 25 years of broadcast and digital creation experience. For decades, James has overseen and designed audio production teams with annual sales in the tens of millions of dollars. He administers Internet audio and podcasting and programs the non-linear audio processing for the company’s terrestrial audio and online streaming platforms. James consults on audio design and training programs for Berklee College of Music, the Art Institutes of America, Georgia State, and the University of Georgia. In the 1990s, James began designing professional studio build-outs for large media companies; most notably, in 1999 he designed and built all the studios (audio and film) for Susquehanna Media (later purchased by Cumulus Media).

Zenon Olbrys
Financial Advisor/ Interim CFO

Zenon is an experienced global business executive specializing in start-ups and transformation ventures. He is serving as Yobe’s financial advisor and interim CFO. Zenon is a financial and operational business expert with an accomplished career leading technology companies to achieve unprecedented success while increasing ROI for shareholders. Zenon is an alumnus of both Oxford University and The Ohio State University.

Live.
Noiscan
(Noise and Echo Cancellation Solutions)

A feed-forward (single-mic) noise and echo cancellation technology delivering superior (HD) call quality with bandwidth reduction in a downloadable software application.
More information
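Noiscan's internals are proprietary; the classic textbook technique that single-microphone noise reduction products build on is spectral subtraction, sketched here (not Yobe's algorithm):

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, floor=0.05):
    """Subtract an estimated noise magnitude spectrum from the noisy
    signal's spectrum, keeping a small spectral floor to avoid
    artifacts. Standard textbook method, not Yobe's proprietary one."""
    spectrum = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_estimate))
    clean_mag = np.maximum(np.abs(spectrum) - noise_mag,
                           floor * np.abs(spectrum))
    phase = np.angle(spectrum)  # reuse the noisy phase, a common shortcut
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))

rng = np.random.default_rng(1)
t = np.arange(16000) / 16000
speech = np.sin(2 * np.pi * 300 * t)  # stand-in for a voiced sound
noise = 0.3 * rng.standard_normal(16000)
denoised = spectral_subtract(speech + noise, noise)
```

Real systems estimate the noise spectrum from speech-free frames rather than having it handed to them, which is where the hard engineering lives.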

Work.
VoiceBright
(VoIP Communications Solutions)

Yobe’s software solutions for incoming and outgoing VoIP communications optimize conference calls, enhance call clarity, and allow for lower bandwidth requirements, all while increasing audio quality.
More information

Play.
.y3d Audio (Media Enhancement Solutions)

Many argue that compressed files are not a true representation of the originally produced recording; hence, “HD Audio” platforms have been created to address the challenge of delivering high-quality content in this new digital age. Yobe’s ability to increase sound quality while addressing file storage and download constraints is a clear answer to this challenge.
More information

Secure.
1Factor
(Voice Biometric Security Solutions)

The hurdles for traditional voice biometric solutions are system faults caused by noise and poor communication links. Yobe’s proprietary process eliminates unwanted artifacts and leaves a clean, enhanced voice signal for feature extraction and analysis.
More information
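One of the biometric cues mentioned in Yobe's materials is the pitch track. A minimal autocorrelation pitch estimator (a standard textbook method, not Yobe's extractor) illustrates the kind of feature pulled from a cleaned voice signal:

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame by finding
    the autocorrelation peak within the plausible human pitch range.
    Textbook sketch; real biometric extractors are far more robust."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)  # smallest lag (highest pitch) considered
    hi = int(sample_rate / fmin)  # largest lag (lowest pitch) considered
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

sr = 16000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 180 * t)  # 180 Hz stand-in for a voice
pitch = estimate_pitch(frame, sr)
```

A pitch track is simply this estimate computed frame by frame over time; its contour is one of the traits a voice biometric system can compare across enrollments.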

Share

News

SBIR Team
04.02.17

On December 1, 2016, Yobe Inc. was awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant to conduct research and development (R&D) work on “A Cocktail Party Technology: Real-Time Conversation Separation from Background Voices and Sounds.”

The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project is that it will, for the first time, make it possible to create voice technologies whose speech and speaker recognition performance does not significantly degrade in the presence of interfering voices or environmental sounds. This challenge has kept many voice technologies out of both the mobile and IoT markets; solving it makes the noisy world of smartphones viable for voice technologies (like voice authentication) that have so far avoided the space.

“The National Science Foundation supports small businesses with the most innovative, cutting-edge ideas that have the potential to become great commercial successes and make huge societal impacts,” said Barry Johnson, Director of the NSF’s Division of Industrial Innovation and Partnerships. “We hope that this seed funding will spark solutions to some of the most important challenges of our time across all areas of science and technology.”

NSF accepts Phase I proposals from small businesses twice annually, in June and December. Small businesses with innovative science and technology solutions and commercial potential are encouraged to apply. All proposals submitted to the NSF SBIR/STTR program undergo a rigorous merit-based review process.

To learn more about the NSF SBIR/STTR program, visit: www.nsf.gov/SBIR

04.02.17

As voice recognition finally hits parity with human performance, vendors are using vocal computing in more sophisticated ways. By Stephanie Condon for Between the Lines.

Amazon’s voice-activated assistant Alexa made a splash at CES in 2016, and at this year’s show, Alexa is just about everywhere you look.

While Amazon has its own motivations for distributing its platform as broadly as possible, its momentum also represents a larger trend, according to Shawn DuBravac, chief economist for the Consumer Technology Association (CTA).

Speech recognition and vocal computing have reached an inflection point, he said at the Las Vegas conference, now that the word error rate (WER) has reached about 5 percent, effectively achieving human parity. In the mid ’90s, the WER was effectively 100 percent. By 2013, it was around 23 percent.
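The word error rate quoted above is computed as the word-level edit distance (substitutions, insertions, deletions) between the recognizer's output and a reference transcript, divided by the reference length. A minimal implementation:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance divided by the
    number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[-1][-1] / len(ref)

# One substituted word in a 20-word reference gives a WER of 5 percent,
# the "human parity" level cited above.
ref = " ".join(f"word{i}" for i in range(20))
hyp = ref.replace("word7", "wird7")
wer = word_error_rate(ref, hyp)
```

Because WER counts insertions and deletions as well as substitutions, it can exceed 100 percent for a recognizer that hallucinates extra words, which is why early-90s systems could be described as "effectively 100 percent" error.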


“We’ve seen more progress in this technology in the last 30 months than we saw in the last 30 years,” DuBravac said. “Ultimately vocal computing is replacing the traditional graphical user interface.”

The CTA estimates about 5 million voice-activated digital assistants have been sold to date, and that this figure will double in 2017.

There are other factors, along with better speech recognition, that are “ushering in a new era of faceless computing,” DuBravac said. GUIs started to disappear with wearables and other non-traditional computing applications around 2010, he noted. That trend is expected to continue with “robots” like Mayfield’s Kuri or Samsung’s POWERbot VR7000 vacuum cleaner — two devices officially unveiled this week at CES.

Kuri may be a bit pricey now — it will cost $700 once it launches in the US — but ultimately, removing GUIs should help lower the price and battery requirements of such devices. DuBravac noted that GUIs themselves were initially far too expensive, with the Xerox Star hitting the market in 1981 at $75,000. It took just a few years for GUIs to become commercially viable. Thanks to voice recognition, the CTA expects home robots to grow from around 2.9 million in 2016 to 5 million by 2020.

And as the technology advances, voice recognition features will become more nuanced and useful. For instance, financial services companies are already adopting voice-activated functions. Voice recognition will also drive a more sophisticated smart home market. The company Somfy, which has for decades made automatically retractable awnings, unveiled at CES its new voice controlled, all-in-one home security system, the Somfy One. “It’s clear [voice] is the new interface of the home,” Somfy’s Jean-Marc Prunet told ZDNet.

09.07.16

PCC Technology Group, LLC (PCC), a global provider of technology solutions in the government and energy sectors, and Yobe Inc., a New York-based company that develops artificial-intelligence-powered algorithmic software, have created a joint venture to market the first voice authentication product that blocks out background noise. The new company, Cenuity V-ID, will be instrumental to the growth of biometric voice identification and authentication in the mobile device industry.

Cenuity V-ID’s differentiator is found in Yobe’s YVA technology, which employs an artificial intelligence engine and a revolutionary digital voice processing approach to distinguish voice from background noise. “The YVA technology is capable of identifying an increased number of biometrics (acoustic resonances, tone, pitch tracks, etc.) under noisy conditions, thus providing significantly higher accuracy and a lower rate of false positives,” said Hamid Nawab, PhD, chief technology advisor to Yobe Inc.

“This joint venture marries the strength of Yobe’s intelligent audio technologies and PCC’s 20 years of experience providing mission critical applications to create a revolutionary voice biometric identification platform that is tailor-made for the mobile environment,” said Ken Sutton, CEO of Yobe Inc. “Cenuity V-ID solves the major issue with error rates due to background noise and will expand the market for both voice and speech recognition solutions. Voice biometrics can now achieve higher accuracy levels and compete with technologies such as iris scanning and face recognition at a fraction of the cost.”

Full article here: http://www.prweb.com/releases/PCCTechnology/Yobe/prweb12099742.htm