Friday, November 25, 2016

VR pop-up book

http://uploadvr.com/peronio-pop-up-book-vr-ar/

AI lip reading surpasses humans

http://thetechportal.com/2016/11/25/google-ai-lip-reading-tv/

Yet another AI update from Google, folks. A couple of days ago, we saw how Google's translation neural networks appeared to develop their own internal "interlingua" that lets them translate between language pairs they were never explicitly trained on. Now they're working to create the most accurate lip-reading software ever, even more advanced than our own 'human' skills.
Researchers from Google's DeepMind AI division and the University of Oxford are working together on this project. To accomplish the task, the scientists fed thousands of hours of TV footage (5,000 hours, to be precise) from the BBC to a neural network. It was made to watch six different TV shows that aired between January 2010 and December 2015, covering 118,000 different sentences and some 17,500 unique words.
The primary task of the AI was to annotate the video footage. But in the published research paper, the Google and Oxford researchers describe their ambition for the project as follows:
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem – unconstrained natural language sentences, and in the wild videos
To gauge the progress: the neural network successfully deciphered words with 46.8 percent accuracy, based purely on analysis of mouth movements. Accuracy under 50 percent might seem laughable, but consider some perspective. When the same set of TV shows was shown to a professional lip-reader, they were able to decipher only 12.4 percent of words without error. That gap shows just how far the AI has pulled ahead of even a human expert in this particular field.
But this is not the only transcribing AI to surface recently. This research follows similar work published by a separate group of researchers, also from the University of Oxford. Using similar techniques but different input data, that group developed a lip-reading AI called LipNet.
That neural network achieved 93.4 percent accuracy in its analysis, compared with 52.3 percent human accuracy. It was able to attain such high numbers because the research group tested the AI on specially recorded footage in which volunteers spoke formulaic sentences.
The two research groups are now looking to draw on materials from each other's research to truly understand the capabilities of their respective neural networks. Google's DeepMind researchers have christened their neural network "Watch, Listen, Attend, and Spell", and it could be used for a host of applications. The scientists believe it might help hearing-impaired people understand conversations, annotate silent films, and let users instruct virtual assistants by mouthing words at a camera.
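For the curious, here is what an architecture in that family looks like. This is a minimal PyTorch sketch of an attention-based sequence-to-sequence lip reader in the spirit of "Watch, Listen, Attend, and Spell"; the layer sizes, frame encoder, and character vocabulary are illustrative assumptions, and the published model's audio ("listen") stream is omitted.

```python
# Minimal sketch of an attention-based sequence-to-sequence lip reader,
# in the spirit of "Watch, Listen, Attend, and Spell". NOT the published
# architecture: sizes, the frame encoder, and the vocabulary are assumptions.
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, vocab_size=40, hidden=256):
        super().__init__()
        # "Watch": a small CNN encodes each mouth-region frame; a GRU models time.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.encoder = nn.GRU(64, hidden, batch_first=True)
        # "Attend and Spell": attend over encoder states, emit one character per step.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frames, chars):
        # frames: (batch, time, 1, H, W); chars: (batch, len) previous characters
        b, t = frames.shape[:2]
        feats = self.frame_cnn(frames.flatten(0, 1)).view(b, t, -1)
        enc, _ = self.encoder(feats)
        emb = self.embed(chars)
        ctx, _ = self.attn(emb, enc, enc)              # attend over video states
        dec, _ = self.decoder(torch.cat([emb, ctx], dim=-1))
        return self.out(dec)                           # per-step character logits

model = LipReader()
logits = model(torch.randn(2, 75, 1, 64, 64), torch.zeros(2, 20, dtype=torch.long))
print(logits.shape)  # torch.Size([2, 20, 40])
```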

Tuesday, November 15, 2016

Voice analytics - emotion and health

http://www.beyondverbal.com/

http://www.nydailynews.com/life-style/health/voice-heart-disease-study-article-1.2874222


This isn’t just talk — science is getting closer to using your voice to diagnose whether you have heart disease and other disorders.
The Mayo Clinic teamed up with Beyond Verbal, a voice analytics company, to identify links between vocal features and coronary artery disease. CAD is the most common heart disease, where plaque builds up in the arteries, causing heart attacks. The study’s diagnostic tool found that a single biomarker in the voice signal was associated with a 19-fold increased likelihood of CAD.
Beyond Verbal’s previous research has also suggested a link between voice signal characteristics and neurological disorders such as dyslexia, Parkinson’s Disease, ADHD and autism, but this is the first study to link vocal biomarkers with heart disease.
“This is so groundbreaking and new, that it’s hard to describe in layman’s terms,” Yuval Mor, the CEO of Beyond Verbal, told the News.
That’s because the vocal characteristics we’re talking about here are much more specialized than the volume or timbre of a person’s speech. Mor compared it to human vision, where the naked eye can see a range of wavelengths, but it takes specialized sensors to make infrared waves or ultraviolet rays visible to us. Beyond Verbal’s diagnostic tool extracts vocal information in a similar fashion. “It can analyze the voice and identify different medical conditions in a way that the human ear can’t hear,” he said.
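Beyond Verbal's biomarkers are proprietary, but to give a rough sense of what "vocal information the ear can't hear" means in practice, below is a minimal Python sketch using the open-source librosa library. It computes the kind of frame-level acoustic features (pitch, spectral envelope, energy) such a tool might start from; the feature names and the file path are illustrative assumptions, not the company's method.

```python
# Sketch only: generic acoustic features a voice-analytics pipeline might
# compute, using librosa. Beyond Verbal's actual biomarkers are proprietary.
import librosa
import numpy as np

def vocal_features(path):
    y, sr = librosa.load(path, sr=16000)            # mono waveform at 16 kHz
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # per-frame pitch estimate (Hz)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral-envelope features
    rms = librosa.feature.rms(y=y)                  # frame-level energy
    return {
        "pitch_mean_hz": float(np.mean(f0)),
        "pitch_instability": float(np.std(np.diff(f0))),  # rough jitter proxy
        "mfcc_means": mfcc.mean(axis=1).tolist(),
        "energy_var": float(np.var(rms)),
    }

# A study pipeline would feed features like these, from three 30-second
# recordings per patient, into a classifier against the CAD labels.
print(vocal_features("patient_recording.wav"))      # hypothetical file path
```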



In this double-blind study, 120 patients each gave three 30-second voice recordings in English, which were documented and analyzed by the voice analysis tool. It found a strong relationship between certain vocal characteristics and CAD.
The team plans to repeat the experiment in China and Israel to determine if the same correlation will show up in different languages. They are also going to test for any voice characteristics linked to other cardiovascular diseases.
Mor suggested that doctors could eventually diagnose medical conditions remotely by analyzing patients’ voice recordings.
“The idea eventually is to give people an app so we can check on them and tell them if everything is OK,” he said. “We are opening the door for something completely new that can make a huge difference in the medical community.”

Apple AR 2018

http://www.pcworld.com/article/3141793/consumer-electronics/apple-said-to-be-eyeing-wearable-ar-glasses.html

Apple said to be eyeing wearable AR glasses

The iPhone maker would enter a crowded VR and AR market where Microsoft, Facebook and Google play


Apple is working on wearable digital glasses that would connect wirelessly to the iPhone and show content in the wearer’s field of vision, according to a news report.
The iPhone maker has previously indicated its interest in augmented reality. Unlike the simulated world of virtual reality, AR supplements the user’s normal view of the world with images and information.
“We are high on AR for the long run. We think there are great things for customers and a great commercial opportunity,” Apple CEO Tim Cook said in an earnings call in July, talking about the need for Apple’s devices to work with other developers' products, such as the successful Pokémon Go game.
The company has also hired VR and AR experts and made some acquisitions that could help it meet its AR goals. Apple has discussed the glasses project with potential suppliers, reported Bloomberg on Monday, citing people familiar with those discussions.
The company may be close to a prototype stage as it is said to have ordered small quantities of near-eye displays from one supplier for testing. The company hasn’t, however, ordered components in numbers that would suggest that mass production plans are imminent, a person told Bloomberg.
Apple could not be immediately reached for comment. The company will be entering a crowded market where Microsoft is pushing its AR glasses, called HoloLens, while Facebook-owned Oculus VR is targeting the VR market. Google launched its VR headset, Daydream View, which works with compatible phones including the company's Pixel smartphones.
Apple’s glasses would be introduced in 2018 at the earliest, if the company goes ahead with the project, according to a Bloomberg source. New product categories, including the AR glasses, could help the company make up for falling iPhone sales.

Saturday, November 12, 2016

Semantic Scholar

https://www.semanticscholar.org/

Semantic Scholar detail:
http://allenai.org/semantic-scholar/

Visual:
http://allenai.org/plato/

Allen Institute for Artificial Intelligence in Seattle, Washington

A Computer Program Has Ranked the Most Influential Brain Scientists of the Modern Era (sciencemag.org)

sciencehabit writes from a report via Science Magazine: A computer program has parsed the content of 2.5 million neuroscience articles, mapped all of the citations between them, and calculated a score of each author's influence on the rest to determine the most influential brain scientists of the modern era. The program, called Semantic Scholar, is an online tool built at the Allen Institute for Artificial Intelligence in Seattle, Washington. Its creators hope to expand it to all of the biomedical literature next year, over 20 million papers. The program sees much more than the typical academic search engine, says the project leader. "We are using machine learning, natural language processing, and [machine] vision to begin to delve into the semantics."
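The article doesn't spell out the influence metric, but the standard way to score nodes in a citation graph is PageRank: an author is influential if cited by other influential authors. Here is a toy sketch with networkx; the author names and citation counts are made up for illustration.

```python
# Toy influence ranking over a citation graph via PageRank (networkx).
# Not Semantic Scholar's actual metric; names and counts are invented.
import networkx as nx

G = nx.DiGraph()
# Edge A -> B means "a paper by A cites a paper by B"; weights count citations.
citations = [
    ("author_a", "author_b", 12),
    ("author_c", "author_b", 30),
    ("author_b", "author_d", 5),
    ("author_c", "author_d", 8),
]
for src, dst, n in citations:
    G.add_edge(src, dst, weight=n)

# PageRank: influence flows along citations from influential citers.
scores = nx.pagerank(G, alpha=0.85, weight="weight")
for author, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.3f}")
```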

Wednesday, November 9, 2016

IBM Project Intu - Watson on any device

http://siliconangle.com/blog/2016/11/09/ibm-offers-developers-a-way-to-extend-watsons-capabilities-into-any-device/

IBM Corp. is beefing up its cognitive computing efforts with the launch of a new system-agnostic platform called Project Intu that’s designed to enable what it calls “embodied cognition” in a range of devices.
In IBM’s parlance, “cognitive computing” refers to machine learning. The idea behind Project Intu is that developers will be able to use the platform to embed the various machine learning functions offered by IBM’s Watson service into various applications and devices, and make them work across a wide spectrum of form factors.
So, for example, developers will be able to use Project Intu to embed machine learning capabilities into pretty much any kind of device, from avatars to drones to robots and just about any other kind of Internet of Things device. As a result, these devices will be able to “interact more naturally” with users via a range of emotions and behaviors, leading to more meaningful and immersive experiences, IBM said.
One of the best examples of where Project Intu might be able to help out developers is in the area of conversation, language and visual recognition. Here, developers can integrate Watson’s abilities with a device’s capabilities to effectively “act out” interactions with users. So, rather than the developer having to program each device or avatar’s individual movements, Project Intu does it for them, combining movements that are appropriate for the specific task the device or avatar is performing, such as greeting a visitor at a hotel, or helping out a customer in a retail store.
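As an illustration of that idea, here is a hedged sketch of intent-to-behavior mapping: one recognized intent drives different "acted out" behaviors depending on the device's form factor. This is not IBM's Intu API; classify_intent is a stub standing in for a call to a Watson language service, and the behavior names are invented.

```python
# Illustrative stand-in for "embodied cognition": the same intent maps to
# behaviors each device can physically perform. Not IBM's Intu API.

def classify_intent(utterance: str) -> str:
    """Stub for a cloud NLU call; returns a coarse intent label."""
    text = utterance.lower()
    if any(w in text for w in ("hello", "hi", "welcome")):
        return "greet"
    if "where" in text or "find" in text:
        return "guide"
    return "unknown"

# Hypothetical behavior tables per device form factor.
BEHAVIORS = {
    "hotel_avatar": {"greet": ["wave animation", "smile", 'say "Welcome!"'],
                     "guide": ["point animation", "display map"]},
    "retail_robot": {"greet": ['say "Hello!"', "flash LEDs"],
                     "guide": ["drive to aisle", 'say "Follow me"']},
}

def act(device: str, utterance: str) -> list[str]:
    intent = classify_intent(utterance)
    return BEHAVIORS.get(device, {}).get(intent, ["ask user to rephrase"])

print(act("hotel_avatar", "Hi there!"))           # avatar-style greeting
print(act("retail_robot", "Where are the TVs?"))  # robot-style guidance
```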
“IBM is taking cognitive technology beyond a physical technology interface like a smartphone or a robot toward an even more natural form of human and machine interaction,” said Rob High, an IBM Fellow, and vice president and chief technology officer of Watson. “Project Intu allows users to build embodied systems that reason, learn and interact with humans to create a presence with the people that use them – these cognitive-enabled avatars and devices could transform industries like retail, elder care, and industrial and social robotics.”
What’s more, because Project Intu is system-agnostic, developers can use it to build cognitive experiences on a wide range of operating systems, be it Raspberry PI, MacOS, Windows or Linux.
Project Intu is still an experimental platform, and it can be accessed via the Watson Developer Cloud, the Intu Gateway, and also on GitHub. IBM is hoping developers will play around with the platform and provide feedback before launching it as a fully fledged beta in the near future.

Tuesday, November 8, 2016

Bank of America 'Erica' voice assistant


Erica will use artificial intelligence, predictive analytics and cognitive messaging to help customers do things like make payments, check balances, save money and pay down debt. She will also direct people to look up their FICO score and check out educational videos and other content.
Through Erica, Bank of America hopes to extend some of the benefits of the one-to-one personal service and advice usually reserved for top-tier customers to the masses, said Michelle Moore, the bank's head of digital banking.

Sunday, November 6, 2016

Voice assistant comparison

http://www.nytimes.com/2016/01/28/technology/personaltech/siri-alexa-and-other-virtual-assistants-put-to-the-test.html

WHEN I asked Alexa earlier this week who was playing in the Super Bowl, she responded, somewhat monotonously, “Super Bowl 49’s winner is New England Patriots.”
“Come on, that’s last year’s Super Bowl,” I said. “Even I can do better than that.”
At the time, I was actually alone in my living room. I was talking to the virtual companion inside Amazon’s wireless speaker, Echo, which was released last June. Known as Alexa, she has gained raves from Silicon Valley’s tech-obsessed digerati and has become one of the newest members of the virtual assistants club.
All the so-called Frightful Five tech behemoths — Apple, Microsoft, Amazon, Facebook and Google, now part of Alphabet — offer virtual assistants, which handle tedious tasks in response to voice commands or keystrokes, on various devices. Apple’s Siri is the best known, having been available since 2011, but Microsoft now has Cortana, Facebook is testing one called M, and Google builds its voice assistant into its search apps.
These companies are presenting scorecards of their progress with quarterly earnings reports in the next few weeks, so what better time to hand out report cards to their artificially intelligent assistants? With that in mind, I set up tests for the assistants and graded their abilities to accomplish 16 tasks in categories that most consumers generally enjoy: music, productivity, travel and commuting, dining, entertainment and interests like sports.

Future of Apple voice-first tech

https://medium.com/@brianroemmele/has-apple-lost-its-way-with-ai-cde76172a630#.8chexdyyi



Facebook Messenger assistant

http://digitalpash.com/what-makes-facebook-m-different-from-siri-google-now-and-cortana/

http://www.recode.net/2015/11/3/11620286/facebooks-virtual-assistant-m-is-super-smart-its-also-probably-a-human

Summary:

M is intended to actually complete tasks for you, rather than just help make a schedule or search the web.

M is currently human-driven: human trainers respond to requests, and the system learns from their responses in order to improve.
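A minimal sketch of that human-in-the-loop pattern follows. It is an assumption about M's design based on the reporting, not Facebook's code: the model answers only when confident; otherwise a human trainer answers, and the exchange is logged as future training data.

```python
# Human-in-the-loop assistant sketch (assumed design, not Facebook's code):
# low-confidence requests escalate to a human, and the pair is logged.
training_log = []

def model_answer(request: str) -> tuple[str, float]:
    """Stub model: returns (answer, confidence in [0, 1])."""
    if "weather" in request.lower():
        return "Sunny, 72F.", 0.9
    return "", 0.1  # unfamiliar request -> low confidence

def handle(request: str, threshold: float = 0.8) -> str:
    answer, confidence = model_answer(request)
    if confidence < threshold:
        answer = input(f"[trainer] {request!r}: ")  # human takes over
        training_log.append((request, answer))      # becomes training data
    return answer

print(handle("What's the weather?"))       # model is confident, answers itself
print(handle("Book me a table for two."))  # escalated to a human trainer
```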

Samsung digital assistant - Viv



Acquired for an undisclosed amount.

Like Google Assistant, Viv is designed to answer natural language queries by integrating with a variety of web services. But where Google already has a range of in-house services -- Maps, Gmail, search -- from which to gather context, Viv aims to build an open ecosystem. Many of the useful functions will be delivered by third party developers, a model similar to the one Amazon.com is pursuing for its Echo devices.
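To make the open-ecosystem model concrete, here is a hedged sketch of a capability registry: third-party developers register services, and the assistant routes a parsed query to whichever service claims it. The names and the plain-lookup routing are illustrative assumptions; Viv reportedly generates programs over its registered services dynamically rather than doing a simple lookup.

```python
# Illustrative plugin-registry assistant: third-party services register
# capabilities, queries are routed by intent. Not Viv's actual design.
from typing import Callable

REGISTRY: dict[str, Callable[[dict], str]] = {}

def capability(name: str):
    """Decorator a third-party developer would use to register a service."""
    def register(fn: Callable[[dict], str]):
        REGISTRY[name] = fn
        return fn
    return register

@capability("weather.forecast")
def weather(slots: dict) -> str:
    return f"Forecast for {slots['city']}: sunny."  # would call a weather API

@capability("flowers.order")
def flowers(slots: dict) -> str:
    return f"Ordered {slots['item']} to {slots['address']}."  # florist service

def handle(intent: str, slots: dict) -> str:
    service = REGISTRY.get(intent)  # Viv would synthesize a plan here instead
    return service(slots) if service else "No service can handle that."

print(handle("weather.forecast", {"city": "Seoul"}))
print(handle("flowers.order", {"item": "roses", "address": "221B Baker St"}))
```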
Initially, the Viv team planned to sell the service to consumer electronics manufacturers and app developers as a way to give AI capabilities to the Internet of Things.
Viv CEO Dag Kittlaus told TechCrunch back in May that the company's goal was "ubiquity."
Samsung's agreement to buy the company, though, is likely to limit the number of vendors distributing Viv. Being in every room of the home is still a possibility, since in addition to smartphones Samsung also makes TVs, refrigerators, washing machines, air conditioners, microwave ovens and robot vacuum cleaners.
Announcing the deal on Thursday, Samsung said it is committed to virtual personal assistants as part of the company's broader vision to deliver an AI-based open ecosystem across all of its devices and services.


That continuity is good news for developers who had invested in integrating their services with Viv, but finding themselves now tied to Samsung may not be so welcome.
Samsung already has a personal assistant on its smartphones: S Voice. Early versions of this used the Vlingo speech-to-text engine, while more recent ones rely on a speech-to-text service provided by Nuance Communications, the same one used by Viv and which also powered Siri. It hasn't said whether it will replace S Voice with Viv on future phones, or integrate Viv's capabilities into future versions of S Voice.

http://tech.firstpost.com/news-analysis/samsung-has-agreed-to-acquire-viv-a-next-generation-ai-assistant-339339.html

http://viv.ai/

https://techcrunch.com/2016/05/09/siri-creator-shows-off-first-public-demo-of-viv-the-intelligent-interface-for-everything/

https://techcrunch.com/2016/10/05/samsung-acquires-viv-a-next-gen-ai-assistant-built-by-creators-of-apples-siri/

https://www.cnet.com/news/samsung-to-launch-digital-assistant-with-galaxy-s8-artifiicial-intelligence-viv/

Samsung plans to launch an artificial intelligence digital assistant with its upcoming Galaxy S8, the company said Sunday, according to Reuters.
The announcement comes a month after Samsung revealed it had agreed to acquire the artificial intelligence startup behind Viv -- a voice assistant that aims to handle everyday tasks for you all on its own. Samsung plans to incorporate the platform into its line of Galaxy phones, as well as home appliances and wearable devices, Reuters reported.
Artificial intelligence, a term used for the ability of a machine, computer or system to exhibit humanlike intelligence, is widely expected to represent the next frontier of computing. With that in mind, AI-powered voice assistants have suddenly become all the rage, offering a hands-free and more natural way to ask questions, find information and manage busy lives.
Samsung is hoping the digital assistant will help it rebound from the public relations and business nightmare created by the recalls and cancellation of the overheating Galaxy Note 7. Last month, Samsung's mobile division, the division responsible for the Note 7, reported a decline in operating profit of about 96 percent -- its lowest in nearly eight years.
Samsung did not immediately respond to a request for comment.

Saturday, November 5, 2016

Watson IOT

http://www.ibm.com/internet-of-things

drone light shows



http://www.marketwatch.com/story/can-drones-replace-fireworks-2016-11-04

On a cold October night in the quiet industrial town of Krailling, Germany, a single blinking drone flew into the air. The distant light was soon joined by 499 more, lighting up in the shape of the number 500.
They then flew to new positions, this time spelling out the word “Intel.”
The light show was a proof of concept for Intel Corp.’s leap into the drones-as-entertainment business. Intel on Friday announced a drone called the “Shooting Star,” a flying contraption about the weight of a volleyball that can light up in 4 billion color combinations for commercial entertainment light shows.


Intel is not alone in producing drones as a form of nighttime entertainment that could augment or replace fireworks. The Walt Disney Co. has filed numerous patents for drones that it dubbed “Flixels,” as first reported by MarketWatch in August 2014. The patents indicate that the drones would follow pre-programmed flight paths and emit LED lights at various intervals, lighting up the sky. Others might be able to fly through the air with puppets suspended from the base.

(Images from a patent Disney filed involving drones in nighttime entertainment)

Drones should prove to be safer than traditional fireworks, which accounted for an estimated 10,500 trips to U.S. emergency rooms in 2014 and at least 11 deaths, according to the U.S. Consumer Product Safety Commission. The U.S. has seen rapid growth in the fireworks industry in the past decade and a half. In 2015, Americans purchased 285.3 million pounds of fireworks, according to the American Pyrotechnics Association, spending $1.09 billion.



Whether drone-focused light shows will prove to be more cost-efficient is a bigger question. The devices would only have to be purchased once, but would likely cost much more than a standard small-scale fireworks show.
Small-town holiday fireworks displays typically cost about $2,000 to $7,000 for a basic show, according to Premier Pyrotechnics, while the city of Houston spent an estimated $100,000 on its 2016 Fourth of July fireworks show, according to Houston Business Journal. On a grander scale, estimates suggest Macy’s Inc. may spend $6 million on its annual Fourth of July fireworks show.
Intel’s drones are not publicly for sale, and the chip maker would not disclose how much they would cost. For now, the drones are proof of the ability to automate multiple drone flights at once, using software that could be adapted to commercial applications like mapping or inspections.


(Image: Intel’s Shooting Star drone)

“We want to showcase that drones can be used for something different,” said Natalie Cheung, Intel’s product manager for unmanned aerial vehicles.
The last time Intel used drones for entertainment was in 2015, when 100 drones were each manually pre-programmed to take their positions in the sky, which Cheung admitted was not practical. Now, users can enter images into a software program, and an algorithm determines the path the drones need to fly to create the image.
“I can’t imagine how you would manually place 500 drones in the air for a five-minute show,” Cheung said. “And while 100 is amazing, 500 is breathtaking.”
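One concrete subproblem such software has to solve is assigning each drone to a target point of the image so that total travel is minimized. Below is a sketch of that step, an assumption about the general approach rather than Intel's algorithm, using the Hungarian method from SciPy; the toy image and drone positions are invented.

```python
# Sketch: sample target points from an image's bright pixels, then assign
# drones to targets with minimum total travel. Not Intel's actual algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Stand-in for "enter an image": target points sampled from bright pixels.
image = np.zeros((100, 100))
image[40:60, 20:80] = 1.0                      # a bright bar to draw in the sky
ys, xs = np.nonzero(image)
idx = rng.choice(len(xs), size=50, replace=False)
targets = np.stack([xs[idx], ys[idx]], axis=1).astype(float)

# Current drone positions (e.g., the launch grid).
drones = rng.uniform(0, 100, size=(50, 2))

# Cost matrix: distance from every drone to every target point.
cost = np.linalg.norm(drones[:, None, :] - targets[None, :, :], axis=2)
drone_ids, target_ids = linear_sum_assignment(cost)  # optimal 1-to-1 pairing

print("total travel:", cost[drone_ids, target_ids].sum())
# Each drone then flies (collision avoidance omitted here) to
# targets[target_ids[i]] and lights its LED.
```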
Intel already manufactures its own commercial-grade drone called the Falcon 8+, and earlier this week announced it had acquired flight-planning software startup MAVinci for an undisclosed price. It has also invested $60 million in consumer drone company Yuneec.