KREUZADER (Posts tagged deep learning)


AlphaGo, in context

AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). However, the way these components are combined is novel and not exactly standard. In particular, AlphaGo uses an SL (supervised learning) policy to initialize the learning of an RL (reinforcement learning) policy that gets perfected with self-play; a value function is then estimated from that RL policy, and that value function plugs into MCTS, which (somewhat surprisingly) uses the (worse, but more diverse) SL policy to sample rollouts. In addition, the policy/value nets are deep neural networks, so getting everything to work properly presents its own unique challenges (e.g. the value function is trained in a tricky way to prevent overfitting). On all of these aspects, DeepMind has executed very well. That being said, AlphaGo does not by itself use any fundamental algorithmic breakthroughs in how we approach RL problems.
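
To make the pipeline above concrete, here is a toy, self-contained sketch of how the pieces fit together: behavior cloning, REINFORCE self-play, value-function fitting, and MCTS that samples rollouts from the SL policy. It is a schematic stand-in rather than DeepMind's code; every function and name is a made-up placeholder and the learning steps are stubbed out.

```python
import random

def behavior_cloning(human_games):
    # Supervised learning on human demonstration data (stubbed: a real
    # implementation would fit a deep policy net to predict expert moves).
    return lambda position, legal_moves: random.choice(legal_moves)

def reinforce_self_play(sl_policy, iterations=1000):
    # REINFORCE on self-play games, initialized from the SL policy
    # (stubbed: real code would copy the SL weights and keep updating them).
    return sl_policy

def fit_value_net(rl_policy):
    # Regress final game outcomes from positions generated by RL self-play
    # (stubbed; the real training is done carefully to avoid overfitting).
    return lambda position: random.uniform(-1.0, 1.0)

def mcts_choose(position, legal_moves, sl_policy, value_net, rollouts=50):
    # One-ply stand-in for MCTS: each candidate move's leaf value mixes the
    # value net's estimate with rollout results; in AlphaGo the rollouts are
    # sampled from the weaker but more diverse SL policy (stubbed here).
    def leaf_value(move):
        rollout_avg = sum(random.choice([-1, 1]) for _ in range(rollouts)) / rollouts
        return 0.5 * value_net((position, move)) + 0.5 * rollout_avg
    return max(legal_moves, key=leaf_value)

sl_policy = behavior_cloning(human_games=[])
rl_policy = reinforce_self_play(sl_policy)
value_net = fit_value_net(rl_policy)
print(mcts_choose(position=(), legal_moves=["D4", "Q16", "C3"],
                  sl_policy=sl_policy, value_net=value_net))
```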

Source: medium.com
alphago google brain alphabet artificial intelligence machine learning deep learning neural networking

Meet the Most Nimble-Fingered Robot Yet

Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects. What’s stunning, though, is that the robot got so good at grasping by working with virtual objects.

The robot learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers fed images to a large deep-learning neural network connected to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

The bot is significantly better than anything developed previously. In tests, when it was more than 50 percent confident it could grasp an object, it succeeded in lifting the item and shaking it without dropping the object 98 percent of the time. When the robot was unsure, it would poke the object in order to figure out a better grasp. After doing that it was successful at lifting it 99 percent of the time. This is a significant step up from previous methods, the researchers say.
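
The confidence-threshold behavior described above lends itself to a very small decision loop. The sketch below is an illustration with invented function names, not the Berkeley group's code: a planner returns a grasp plus a predicted success probability, and when the prediction is at or below 50 percent the robot pokes the object and re-plans.

```python
import random

CONFIDENCE_THRESHOLD = 0.5  # "more than 50 percent confident"

def plan_grasp(depth_image):
    # Stand-in for the grasp-planning network: it would consume the 3-D
    # sensor image and return a grasp pose plus a success probability.
    return {"x": 0.10, "y": 0.25, "angle": 0.7}, random.uniform(0.0, 1.0)

def pick_object(depth_image, sense, poke, execute_grasp):
    grasp, confidence = plan_grasp(depth_image)
    if confidence <= CONFIDENCE_THRESHOLD:
        poke(grasp)                              # nudge the object to expose a better grasp
        grasp, confidence = plan_grasp(sense())  # re-sense and re-plan
    return execute_grasp(grasp)

# Example with dummy callbacks standing in for the sensor and the arm:
succeeded = pick_object(depth_image=None,
                        sense=lambda: None,
                        poke=lambda g: None,
                        execute_grasp=lambda g: True)
```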

Source: technologyreview.com
robot neural networking deep learning machine vision

Taking his last stand against AI, humanity’s best Go player says the robots have already won

Earlier today, AlphaGo, an artificial intelligence program developed by Google’s DeepMind team, defeated Ke Jie, the world’s reigning top-ranked player of the board game Go. The AI won by half a point, the smallest margin possible. Two more games are slated for May 25 and 27, as part of Google’s AI and Go summit being held in an eastern coastal town in China, Wuzhen.

[…]

But the odds for humans aren’t looking good, according to Ke Jie himself. On the eve of his first game against AlphaGo, Ke took to China’s Twitter-like Weibo to announce (link in Chinese) that the three games this week will be the last match he’ll play against robots.

“I believe the future belongs to AI,” he said.

But Ke added that he won’t go down without a fight as he takes a last stand against robots. What makes him better than his AI opponent? The answer, he says, is passion.

Here’s the full text of his letter, titled “The Last Battle,” translated [link]

Source: qz.com
go alphago google brain artificial intelligence deep learning neural networking

Mind-Reading Algorithms Reconstruct What You’re Seeing Using Brain-Scan Data

The difficulty, of course, is finding ways to efficiently process the data from functional magnetic resonance imaging (fMRI) scans. The task is to map the activity in three-dimensional voxels inside the brain to two-dimensional pixels in an image.

That turns out to be hard. fMRI scans are famously noisy, and the activity in one voxel is well known to be influenced by activity in other voxels. This kind of correlation is computationally expensive to deal with; indeed, most approaches simply ignore it. And that significantly reduces the quality of the image reconstructions they produce.  

So an important goal is to find better ways to crunch the data from fMRI scans and so produce more accurate brain-image reconstructions.

Today, Changde Du at the Research Center for Brain-Inspired Intelligence in Beijing, China, and a couple of pals say they have developed just such a technique. Their trick is to process the data using deep-learning techniques that handle nonlinear correlations between voxels more capably.
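
As a rough illustration of the reconstruction problem (not the authors' model, and with made-up sizes), a decoder only needs to map a vector of voxel activations to an image, with at least one nonlinear layer so that correlations between voxels can be exploited rather than ignored:

```python
import torch
import torch.nn as nn

N_VOXELS, IMG_SIDE = 3000, 28  # illustrative sizes

decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 512),
    nn.ReLU(),                       # nonlinearity lets voxels interact
    nn.Linear(512, IMG_SIDE * IMG_SIDE),
    nn.Sigmoid(),                    # pixel intensities in [0, 1]
)

# Toy training step on random stand-in data (real work would pair recorded
# scans with the images the subject was viewing).
voxels = torch.randn(16, N_VOXELS)
images = torch.rand(16, IMG_SIDE * IMG_SIDE)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(decoder(voxels), images)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```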

Source: technologyreview.com
neural networking deep learning neuroscience brain

Deep learning-based artificial vision for grasp classification in myoelectric hands

Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand.
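
A minimal sketch of the kind of component the paper describes might look like the following. This is not the authors' network; the grasp class names and layer sizes are illustrative assumptions. A small CNN maps a camera image of the object to one of a few grasp types, which the prosthesis controller can then use to pre-shape the hand.

```python
import torch
import torch.nn as nn

GRASP_CLASSES = ["palmar", "pinch", "tripod", "lateral"]  # assumed labels

grasp_classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(GRASP_CLASSES)),
)

image = torch.randn(1, 1, 64, 64)   # stand-in grayscale object image
predicted = grasp_classifier(image).argmax(dim=1).item()
print("suggested grasp:", GRASP_CLASSES[predicted])
```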

Source: iopscience.iop.org
bionics deep learning

Lyrebird will offer an API to copy the voice of anyone. It will need as little as one minute of audio from a speaker to compute a unique key defining his or her voice. This key will then allow anything to be generated in the corresponding voice. The API will be robust enough to learn from noisy recordings. The following samples illustrate this feature; they are not cherry-picked.
Please note that these are artificial voices and do not convey the opinions of Donald Trump, Barack Obama and Hillary Clinton.
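
Conceptually, the "unique key" is a fixed-size speaker embedding computed from a short recording, which then conditions a speech generator. The sketch below is a generic illustration of that idea only; it is not Lyrebird's API or model, and every class name and dimension is invented.

```python
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Maps about a minute of audio features to a fixed-size voice key."""
    def __init__(self, n_mels=80, key_dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, key_dim, batch_first=True)

    def forward(self, mel_frames):          # (batch, time, n_mels)
        _, last_hidden = self.rnn(mel_frames)
        return last_hidden.squeeze(0)       # (batch, key_dim) voice key

encoder = SpeakerEncoder()
recording = torch.randn(1, 6000, 80)        # stand-in mel-spectrogram frames
voice_key = encoder(recording)
# A separate synthesizer would take (text, voice_key) and produce audio in
# that voice; robustness to noisy recordings would come from training the
# encoder on noisy data.
```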

Source: lyrebird.ai
lyrebird deep learning artificial intelligence neural networking

Enabling Continual Learning in Neural Networks

Deep neural networks are currently the most successful machine learning technique for solving a variety of tasks including language translation, image classification and image generation. However, they have typically been designed to learn multiple tasks only if the data is presented all at once. As a network trains on a particular task, its parameters are adapted to solve the task. When a new task is introduced, new adaptations overwrite the knowledge that the neural network had previously acquired. This phenomenon is known in cognitive science as ‘catastrophic forgetting’, and is considered one of the fundamental limitations of neural networks.

[…]

A neural network consists of several connections in much the same way as a brain. After learning a task, we compute how important each connection is to that task. When we learn a new task, each connection is protected from modification by an amount proportional to its importance to the old tasks. Thus it is possible to learn the new task without overwriting what has been learnt in the previous task and without incurring a significant computational cost. In mathematical terms, we can think of the protection we attach to each connection in a new task as being linked to the old protection value by a spring, whose stiffness is proportional to the connection’s importance. For this reason, we called our algorithm Elastic Weight Consolidation (EWC).
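
The "spring" picture corresponds to a quadratic penalty added to the new task's loss. Below is a minimal sketch in PyTorch with our own variable names; in practice the per-weight importances come from the Fisher information estimated on the old task.

```python
import torch

def ewc_penalty(params, old_params, importances, lam=1000.0):
    # Sum over weights of (lambda / 2) * importance * (theta - theta_old)^2:
    # stiff "springs" (high importance) keep weights near their old values,
    # loose ones leave them free to adapt to the new task.
    penalty = 0.0
    for theta, theta_old, fisher in zip(params, old_params, importances):
        penalty = penalty + (fisher * (theta - theta_old) ** 2).sum()
    return 0.5 * lam * penalty

# Toy usage: weight 0 was important for the old task, weight 1 was not.
theta = [torch.tensor([1.0, 2.0], requires_grad=True)]
theta_old = [torch.tensor([0.5, 2.5])]
fisher = [torch.tensor([10.0, 0.1])]
extra_loss = ewc_penalty(theta, theta_old, fisher)
# During training on the new task: loss = new_task_loss + ewc_penalty(...)
```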

Source: deepmind.com
deepmind neural networking artificial intelligence deep learning machine learning

Baidu Research presents Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. The biggest obstacle to building such a system thus far has been the speed of audio synthesis – previous approaches have taken minutes or hours to generate only a few seconds of speech. We solve this challenge and show that we can do audio synthesis in real-time, which amounts to an up to 400X speedup over previous WaveNet inference implementations.

Synthesizing artificial human speech from text, commonly known as text-to-speech (TTS), is an essential component in many applications such as speech-enabled devices, navigation systems, and accessibility for the visually-impaired. Fundamentally, it allows human-technology interaction without requiring visual interfaces.

Source: research.baidu.com
baidu speech synthesis deep learning

Beating the World’s Best at Super Smash Bros. with Deep Reinforcement Learning

There has been a recent explosion in the capabilities of game-playing artificial intelligence. Many classes of RL tasks, from Atari games to motor control to board games, are now solvable by fairly generic algorithms, based on deep learning, that learn to play from experience with minimal knowledge of the specific domain of interest. In this work, we will investigate the performance of these methods on Super Smash Bros. Melee (SSBM), a popular console fighting game. The SSBM environment has complex dynamics and partial observability, making it challenging for human and machine alike. The multi-player aspect poses an additional challenge, as the vast majority of recent advances in RL have focused on single-agent environments. Nonetheless, we will show that it is possible to train agents that are competitive against and even surpass human professionals, a new result for the multi-player video game setting.
Source: arxiv.org
artificial intelligence deep learning reinforcement learning super smash bros

Think about the last post you liked — it most likely involved a photo or video. But, until recently, online search has always been a text-driven technology, even when searching through images. Whether an image was discoverable was dependent on whether it was sufficiently tagged or had the right caption — until now.

That’s changing because we’ve [Facebook] pushed computer vision to the next stage with the goal of understanding images at the pixel level. This helps our systems do things like recognize what’s in an image, what type of scene it is, if it’s a well-known landmark, and so on. This, in turn, helps us better describe photos for the visually impaired and provide better search results for posts with images and videos.

Source: code.facebook.com
facebook computer vision artificial intelligence neural networking deep learning machine learning

Wearable AI system can detect a conversation’s tone

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

Source: news.mit.edu
artificial intelligence deep learning

DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker

Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from self-play games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold'em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.
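
The "form of intuition" is a deep counterfactual value network: given a situation described by the pot size, the public cards, and both players' probability distributions over private hands, it estimates a value for every hand, which lets the recursive search stop at a depth limit instead of reasoning to the end of the game. The sketch below only illustrates that interface; the layer sizes and encodings are our own simplifications, not DeepStack's architecture.

```python
import torch
import torch.nn as nn

N_HANDS = 1326         # two-card private hands in hold'em
N_BOARD_FEATURES = 52  # simplified one-hot encoding of the public cards

class CounterfactualValueNet(nn.Module):
    """Maps (pot, board, both players' hand ranges) to per-hand values."""
    def __init__(self):
        super().__init__()
        in_dim = 1 + N_BOARD_FEATURES + 2 * N_HANDS
        self.net = nn.Sequential(
            nn.Linear(in_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2 * N_HANDS),   # a value for every hand, both players
        )

    def forward(self, pot, board, range_p1, range_p2):
        x = torch.cat([pot, board, range_p1, range_p2], dim=-1)
        return self.net(x)

# Toy call with random stand-in inputs:
net = CounterfactualValueNet()
values = net(torch.rand(1, 1), torch.rand(1, N_BOARD_FEATURES),
             torch.rand(1, N_HANDS), torch.rand(1, N_HANDS))
```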

Source: arxiv.org
poker artificial intelligence neural networking deep learning machine learning

Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records

Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated this representation as broadly predictive of health states by assessing the probability of patients developing various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers was among the strongest. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
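
A minimal sketch of one layer of such a model (not the authors' code; dimensions and the noise level are illustrative): a denoising autoencoder corrupts an EHR feature vector and learns to reconstruct it, and three such layers trained greedily on top of one another give the stacked "deep patient" representation.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_dim, hidden_dim, noise=0.2):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        # Masking noise: randomly zero a fraction of the input features.
        corrupted = x * (torch.rand_like(x) > self.noise).float()
        hidden = self.encoder(corrupted)
        return self.decoder(hidden), hidden

# Greedy layer-wise training sketch: train layer 1 to reconstruct raw EHR
# features, then feed its hidden codes to layer 2, and so on for layer 3.
ehr = torch.rand(32, 4000)            # stand-in: 32 patients, 4000 features
dae = DenoisingAutoencoder(4000, 500)
reconstruction, patient_representation = dae(ehr)
loss = nn.functional.mse_loss(reconstruction, ehr)
```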

Source: nature.com
deep learning machine learning artificial intelligence neural networking medicine

The major advancements in Deep Learning in 2016

Deep Learning has been the core topic in the Machine Learning community for the last couple of years, and 2016 was no exception. In this article, we will go through the advancements we think have contributed the most (or have the potential) to moving the field forward, and how organizations and the community are making sure that these powerful technologies are going to be used in a way that is beneficial for all.

Source: tryolabs.com
deep learning machine learning neural networking artificial intelligence

In a paper (pdf) presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.

[…]

The CMU work builds on previous research by Google, OpenAI, and Pennsylvania State University that has found systematic flaws with the way deep neural networks are trained. By exploiting these vulnerabilities with purposefully malicious data called adversarial examples, like the image printed on the glasses in this CMU work, researchers have consistently been able to force AI to make decisions it wouldn’t otherwise make.
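
For a sense of how such adversarial examples are made, here is the standard fast gradient sign method from the earlier line of work the article refers to, not the CMU eyeglass attack itself: nudge the input a small amount in the direction that increases the classifier's loss, and the prediction can flip even though the change is barely perceptible.

```python
import torch
import torch.nn as nn

def fgsm(model, x, label, epsilon=0.03):
    # Perturb the input along the sign of the loss gradient by epsilon.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a stand-in classifier on a random "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial_image = fgsm(model, image, label)
```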

Source: qz.com
deep learning neural networking facial recognition