KREUZADER (Posts tagged artificial intelligence)


How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native

The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!)

To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with TensorFlow, Keras & Nvidia GPUs.

While the use case is farcical, the app is an approachable example of both deep learning and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. It also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches.
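The app’s classifier ultimately reduces to a binary decision. As a minimal sketch of that final step only (not the app’s actual model — the features, weights, and threshold here are invented placeholders), a feature vector from some small on-device CNN is mapped through a sigmoid to a hotdog probability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(features, weights, bias, threshold=0.5):
    """Return (label, probability) for one image's feature vector."""
    p = sigmoid(features @ weights + bias)
    return ("hotdog" if p >= threshold else "not hotdog"), float(p)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # stand-in for learned final-layer weights
x = rng.normal(size=8)   # stand-in for CNN features of one image
label, prob = classify(x, w, 0.0)
print(label, round(prob, 3))
```

Everything before this layer (the convolutional feature extractor) is where the bespoke architecture work described in the article actually lives.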

Source: medium.com
machine learning neural networking silicon valley artificial intelligence tensorflow

MultiModel: Multi-Task Machine Learning Across Domains

Over the last decade, the application and performance of Deep Learning has progressed at an astonishing rate. However, the current state of the field is that the neural network architectures are highly specialized to specific domains of application. An important question remains unanswered: Will a convergence between these domains facilitate a unified model capable of performing well across multiple domains?

Today, we present MultiModel, a neural network architecture that draws from the success of vision, language and audio networks to simultaneously solve a number of problems spanning multiple domains, including image recognition, translation and speech recognition. While strides have been made in this direction before, namely in Google’s Multilingual Neural Machine Translation System used in Google Translate, MultiModel is a first step towards the convergence of vision, audio and language understanding into a single network.
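The overall shape of such a model — small per-modality encoders feeding one shared trunk, with per-task decoder heads — can be sketched as below. This is an illustrative skeleton only, not MultiModel itself; all sizes and weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # width of the shared representation

modality_in = {                          # per-modality input encoders
    "image":  rng.normal(size=(64, D)),
    "text":   rng.normal(size=(32, D)),
    "speech": rng.normal(size=(128, D)),
}
shared_body = rng.normal(size=(D, D))    # one trunk shared by every task
heads = {                                # per-task output decoders
    "classify":  rng.normal(size=(D, 10)),
    "translate": rng.normal(size=(D, 32)),
}

def forward(task_in, task_out, x):
    """Route an input through its modality net, the shared body, and a head."""
    h = np.tanh(x @ modality_in[task_in])   # into the shared space
    h = np.tanh(h @ shared_body)            # shared computation
    return h @ heads[task_out]

logits = forward("image", "classify", rng.normal(size=64))
print(logits.shape)  # (10,)
```

The point of the design is that only the thin encoder/decoder layers are task-specific; the bulk of the parameters in the trunk are trained on all tasks at once.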

Source: research.googleblog.com
neural networking google artificial intelligence machine learning

Using Machine Learning to Explore Neural Network Architecture

At Google, we have successfully applied deep learning models to many applications, from image recognition to speech recognition to machine translation. Typically, our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10¹⁰ candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise.
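The combinatorial blow-up is easy to reproduce. If each layer independently picks from some number of discrete options (filter size, width, activation, connectivity — the per-layer count of 10 here is an illustrative assumption, not a figure from the post), the candidate count is exponential in depth:

```python
def search_space_size(choices_per_layer: int, num_layers: int) -> int:
    """Number of distinct architectures when each layer picks independently."""
    return choices_per_layer ** num_layers

print(search_space_size(10, 10))  # 10000000000, i.e. ~10^10
```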

To make this process of designing machine learning models much more accessible, we’ve been exploring ways to automate the design of machine learning models. Among many algorithms we’ve studied, evolutionary algorithms [1] and reinforcement learning algorithms [2] have shown great promise. But in this blog post, we’ll focus on our reinforcement learning approach and the early results we’ve gotten so far. In our approach (which we call “AutoML”), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly. 
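The controller loop described above can be sketched as a toy REINFORCE-style bandit over a tiny discrete architecture space. This is a deliberately simplified stand-in, not AutoML: a fixed noisy accuracy per architecture replaces the real child training, and the "architectures" are just four indices:

```python
import numpy as np

rng = np.random.default_rng(42)

true_accuracy = np.array([0.60, 0.72, 0.90, 0.55])  # hidden quality of each child
logits = np.zeros(4)                                # controller's policy parameters
baseline, lr = 0.0, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(5000):
    probs = softmax(logits)
    a = rng.choice(4, p=probs)                       # propose a child architecture
    reward = true_accuracy[a] + rng.normal(0, 0.01)  # noisy validation accuracy
    baseline = 0.9 * baseline + 0.1 * reward         # moving baseline (variance reduction)
    grad = -probs
    grad[a] += 1.0                                   # d log pi(a) / d logits
    logits += lr * (reward - baseline) * grad        # REINFORCE update

print(int(np.argmax(logits)))  # the controller concentrates on the best architecture
```

The real system replaces the reward lookup with actually training and evaluating the proposed child network, which is why each "step" is expensive and the controller's sample efficiency matters.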

Source: research.googleblog.com
neural networks neural networking artificial intelligence machine learning google

An Artificial Intelligence Developed Its Own Non-Human Language

A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language.

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language.

Source: The Atlantic
facebook language artificial intelligence

Deal or No Deal? End-to-End Learning for Negotiation Dialogues

This is a PyTorch implementation of research paper Deal or No Deal? End-to-End Learning for Negotiation Dialogues developed by Facebook AI Research.

The code trains neural networks to hold negotiations in natural language, and allows reinforcement learning self play and rollout-based planning.
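The rollout-based planning mentioned in that description can be sketched independently of the repo (this is not the repo's code; the action space, opponent model, and probabilities below are invented): to pick its next action, the agent simulates many random continuations after each candidate action and keeps the action with the best average outcome.

```python
import random

random.seed(0)

ACTIONS = [0, 1, 2, 3]  # e.g. how many of 3 contested items to claim

def rollout_value(action):
    """Simulate one random continuation after `action` and score it.
    Toy opponent: greedier claims are accepted less often (made up)."""
    accept_prob = 1.0 - 0.25 * action
    accepted = random.random() < accept_prob
    return action if accepted else 0.0   # no deal -> zero reward

def plan(n_rollouts=2000):
    """Pick the action with the highest average simulated value."""
    best, best_value = None, float("-inf")
    for a in ACTIONS:
        value = sum(rollout_value(a) for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best, best_value = a, value
    return best

print(plan())  # the moderate claim wins in expectation
```

In the paper, the rollouts are produced by the learned dialogue model itself rather than by a hand-coded opponent, but the planning principle is the same.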

Source: github.com
neural networking artificial intelligence reinforcement learning

A neural approach to relational reasoning

In two new papers, we explore the ability of deep neural networks to perform complicated relational reasoning with unstructured data. In the first paper - A simple neural network module for relational reasoning - we describe a Relation Network (RN) and show that it can perform at superhuman levels on a challenging task. In the second paper - Visual Interaction Networks - we describe a general-purpose model that can predict the future state of a physical object based purely on visual observations.
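The Relation Network's core recipe is: apply a shared function g to every pair of objects, sum the results, then apply a second function f to the sum. A minimal sketch of that computation (the tiny random MLPs and sizes here are placeholders, not trained weights from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(w1, w2):
    """One-hidden-layer ReLU network as a stand-in for g and f."""
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

d, h = 4, 16                                    # object dim, hidden dim
g = mlp(rng.normal(size=(2 * d, h)), rng.normal(size=(h, h)))
f = mlp(rng.normal(size=(h, h)), rng.normal(size=(h, 1)))

def relation_network(objects):
    """objects: (n, d) array of object embeddings -> scalar output."""
    n = objects.shape[0]
    pair_sum = np.zeros(h)
    for i in range(n):                          # shared g over every ordered pair
        for j in range(n):
            pair_sum += g(np.concatenate([objects[i], objects[j]]))
    return float(f(pair_sum)[0])                # f reasons over the aggregate

objects = rng.normal(size=(5, d))
print(relation_network(objects))
```

Because the pairwise outputs are summed, the result is invariant to the order of the objects — a property the paper exploits so the module can consume unstructured sets of objects.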

Source: deepmind.com
neural networking neural networks artificial intelligence deepmind

AlphaGo, in context

AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). However, the way these components are combined is novel and not exactly standard. In particular, AlphaGo uses an SL (supervised learning) policy to initialize the learning of an RL (reinforcement learning) policy that gets perfected with self-play, which they then estimate a value function from, which then plugs into MCTS that (somewhat surprisingly) uses the (worse, but more diverse) SL policy to sample rollouts. In addition, the policy/value nets are deep neural networks, so getting everything to work properly presents its own unique challenges (e.g. the value function is trained in a tricky way to prevent overfitting). On all of these aspects, DeepMind has executed very well. That being said, AlphaGo does not by itself use any fundamental algorithmic breakthroughs in how we approach RL problems.
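The MCTS component's central decision — which child node to explore next — can be sketched with the classic UCT rule (AlphaGo itself uses a prior-weighted variant informed by the policy net, but the exploit/explore trade-off is the same; the numbers below are made up):

```python
import math

def uct_score(child_value_sum, child_visits, parent_visits, c=1.4):
    """Mean value plus an exploration bonus that shrinks with visits."""
    if child_visits == 0:
        return float("inf")              # always try unvisited moves once
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """children: list of (value_sum, visits); return the index to explore next."""
    scores = [uct_score(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))

# A well-visited strong move vs. a barely-tried alternative:
print(select_child([(9.0, 10), (1.0, 1)], 11))  # -> 1: the bonus favors the under-explored move
```

In AlphaGo the exploration bonus is additionally scaled by the SL policy's prior probability for each move, which is what steers the search toward human-plausible moves early on.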

Source: medium.com
alphago google brain alphabet artificial intelligence machine learning deep learning neural networking

Taking his last stand against AI, humanity’s best Go player says the robots have already won

Earlier today, AlphaGo, an artificial intelligence program developed by Google’s DeepMind team, defeated Ke Jie, the world’s reigning top-ranked player of the board game Go. The AI won by half a point, the smallest margin possible. Two more games are slated for May 25 and 27, as part of Google’s AI and Go summit being held in an eastern coastal town in China, Wuzhen.

[…]

But the odds for humans aren’t looking good, according to Ke Jie himself. On the eve of his first game against AlphaGo, Ke took to China’s Twitter-like Weibo to announce (link in Chinese) that the three games this week will be the last match he’ll play against robots.

“I believe the future belongs to AI,” he said.

But Ke added that he won’t go down without a fight as he takes a last stand against robots. What makes him better than his AI opponent? The answer, he says, is passion.

Here’s the full text of his letter, titled “The Last Battle,” translated [link]

Source: qz.com
go alphago google brain artificial intelligence deep learning neural networking

Phase-Functioned Neural Networks for Character Control

We present a real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network. In this network structure, the weights are computed via a cyclic function which uses the phase as an input. Along with the phase, our system takes as input user controls, the previous state of the character, the geometry of the scene, and automatically produces high quality motions that achieve the desired user control. The entire network is trained in an end-to-end fashion on a large dataset composed of locomotion such as walking, running, jumping, and climbing movements fitted into virtual environments. Our system can therefore automatically produce motions where the character adapts to different geometric environments such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings.
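The defining trick — network weights produced by a cyclic function of the phase — can be sketched as below: a few control weight matrices placed around the phase circle are blended with a cyclic cubic Catmull-Rom spline, in the spirit of the paper's phase function. Sizes and values here are illustrative placeholders, not the paper's trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
control = [rng.normal(size=(8, 8)) for _ in range(4)]  # control weights on the phase circle

def phase_weights(phase):
    """phase in [0, 2*pi) -> one interpolated weight matrix."""
    t = (4 * phase / (2 * np.pi)) % 4     # position among the 4 control points
    k = int(t)                            # which spline segment we are in
    w = t - k                             # local interpolation factor in [0, 1)
    y0, y1, y2, y3 = (control[(k + i - 1) % 4] for i in range(4))
    # cubic Catmull-Rom blend, cyclic in the control points
    return (y1
            + w * (0.5 * (y2 - y0))
            + w**2 * (y0 - 2.5 * y1 + 2 * y2 - 0.5 * y3)
            + w**3 * (1.5 * (y1 - y2) + 0.5 * (y3 - y0)))

# The weight function is periodic in the phase:
print(np.allclose(phase_weights(0.0), phase_weights(2 * np.pi)))  # -> True
```

At runtime the character's current gait phase selects (via this interpolation) a fresh set of weights every frame, so a single compact network smoothly changes behavior across the locomotion cycle.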

Source: theorangeduck.com
neural networking artificial intelligence machine learning