Wayve joins Britain’s most-funded driverless car startups

Cambridge-based firm Wayve is joining the ranks of Britain’s most-funded driverless car startups.

Wayve was launched by Cambridge PhDs Amar Shah and Alex Kendall. The company’s technology aims to enable driverless cars to navigate even on unknown roads.

The startup is now set to secure around $20 million in funding from investors including Lastminute.com founder Brent Hoberman and Silicon Valley fund Eclipse.

Lior Susan, Founder of Eclipse, is leading the funding round according to Companies House filings. Susan will take a board seat at Wayve following the round.

Existing Wayve investors, such as Balderton Capital and Firstminute Capital, are also said to be interested in putting more money into the startup.

Wayve clearly has significant interest and it’s not hard to see why – the startup combines powerful technology with several of the current buzzwords attractive to investors.

Rival solutions like Google’s Waymo use lasers for driverless navigation. Wayve argues that lasers are expensive and instead pairs machine learning algorithms with standard cameras.

Shah has previously said Wayve aims to build driverless cars with smarter “brains” than competitors like Google and Uber through better artificial intelligence.

Wayve has already been testing driverless vehicles on the streets of Cambridge since last year.

The main vehicle used by Wayve for its tests is Renault’s compact two-seater Twizy. Wayve has also tested its technology on the much larger Jaguar I-PACE SUV.

Driverless car firms in Britain, such as Five AI and Oxbotica, continue to attract some of the most investment in Europe – but they still fall somewhat behind the billions pumped into US companies.

Debunking the myths of driverless cars

The Department for Transport has said it wants to see fully autonomous cars tested on UK roads by 2021. Astonishingly, this expectation was set out after the fatal crash in Arizona, underlining the very real danger this technology can pose to pedestrians and passengers.

The network used in cars today, known as the Controller Area Network (CAN), was designed back in the 1980s for exchanging information between microcontrollers. Essentially what we have is a peer-to-peer network – and an old one at that.

The main issue here is that these networks weren’t built with security in mind, as it was not a key concern at the time. As time has gone on, modern-day functionality has been layered onto the existing CAN infrastructure, which has no access control or security features – potentially leaving access to cars open to criminals.
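
To see why that matters, here is a minimal sketch of how a frame can be placed on a CAN bus, using the open-source python-can library against a Linux virtual interface; the arbitration ID and payload are hypothetical placeholders, not any real vehicle’s message set.

    # Sketch: sending an arbitrary, unauthenticated frame on a CAN bus.
    # Assumes Linux with a virtual SocketCAN interface (vcan0) and the
    # python-can package installed; ID and payload are hypothetical.
    import can

    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

    # A CAN frame carries only an arbitration ID and up to 8 data bytes;
    # there is no sender authentication, so any node on the bus can
    # transmit any ID it likes.
    msg = can.Message(arbitration_id=0x0F6,
                      data=[0x64, 0x00, 0x00, 0x00],
                      is_extended_id=False)
    bus.send(msg)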

While no real-world hacks have been executed this way, it’s been proven possible: in 2015, two researchers were able to drive a Jeep Cherokee off the road using wireless technology. As a result of this flaw, around 1.4 million vehicles were recalled.

This demonstrates that emerging technologies are being adopted too quickly without manufacturers fully considering the accompanying security implications.

Discomfort with autonomy

Following the fatality in Arizona last year, it was predicted it would be many years until autonomous cars replace human drivers. Realistically, I don’t think driverless cars will, or should, ever replace human drivers in the way we imagine – with nearly everyone continuing to travel in a private car, but one that drives itself. How we implement the technology is for society to decide – whether it takes the form of private vehicles or a coordinated public transport system – but I don’t believe either should remove the human aspect of vehicles.

People are becoming more apprehensive about driverless cars, and rightly so – safety is paramount. Historically, driving has always been an aspect of life where human control is essential, so the idea of watching a film, or sleeping, while a car transports us understandably feels ‘wrong’ to many people.

There are various levels of autonomy in self-driving cars, ranging from add-on features such as parking assistance through to completely driverless vehicles. A ‘grey area’ lies between the two, where the driver has very little to do but retains responsibility for the vehicle and might need to take control at some point. In that grey area there’s a danger the driver may switch off because they aren’t compelled to be in full control, and might therefore be unable to regain control in an emergency.
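
The taxonomy behind this passage is usually expressed as the SAE automation levels. Here is a compact sketch, with descriptions paraphrased for illustration rather than quoted from the SAE J3016 standard:

    # Paraphrased sketch of the SAE J3016 automation levels; the comments
    # are illustrative summaries, not the standard's official wording.
    from enum import IntEnum

    class SAELevel(IntEnum):
        NO_AUTOMATION = 0  # human does everything
        ASSISTANCE = 1     # single add-on feature, e.g. parking assistance
        PARTIAL = 2        # combined assistance; driver must stay engaged
        CONDITIONAL = 3    # the 'grey area': car drives, human is fallback
        HIGH = 4           # no human fallback within a defined domain
        FULL = 5           # completely driverless, everywhere

    # The handover danger described above sits at Level 3, where the
    # driver is disengaged yet still responsible for the vehicle.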

Image Credit: Qualcomm

Beyond the safety implications

There are safety and ethical issues to consider. Christian Wolmar has raised the issue of ‘the Holborn problem’: if driverless cars automatically stop upon sensing a pedestrian, what happens when they are confronted with a mass of people milling across a busy road? Will they wait all day, or will we be asked to accept a lower safety bar? And if, in the lead-up to an accident, the car must choose between protecting pedestrians and protecting its passenger, how will it choose – and whom? A car isn’t able to make moral decisions on its own.

Ethics aside, in terms of cybersecurity it is important to remember that nothing can be 100% secure. Just like housework, security is never ‘done’ – you need to keep repeating the vacuuming and dusting, because the dirt will be back next week. The same logic applies to securing the increasingly advanced technology in modern cars. There are still many unanswered questions and unconsidered scenarios that need addressing before we can even start to consider loosening the reins on bringing autonomous cars to our roads.

David Emm, Principal Security Researcher at Kaspersky Lab

Microsoft and MIT develop AI to fix driverless car ‘blind spots’

Microsoft and MIT have partnered on a project to fix so-called virtual ‘blind spots’ which lead driverless cars to make errors.

Roads, especially while shared with human drivers, are unpredictable places. Training a self-driving car for every possible situation is a monumental task.

The AI developed by Microsoft and MIT compares the action a human takes in a given scenario to what the driverless car’s own AI would do. Where the human decision is the better one, the vehicle’s behaviour is updated for similar future occurrences.
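
The report’s implementation details aren’t reproduced here, so the following is only a schematic sketch of that comparison step; the policy interface and function names are hypothetical stand-ins, not the authors’ code.

    # Schematic sketch of the human-vs-agent comparison described above.
    # `policy` and the episode format are hypothetical stand-ins.
    def collect_feedback(policy, episodes):
        """Flag states where the human's choice diverged from the agent's."""
        feedback = []
        for state, human_action in episodes:  # demonstrations or corrections
            agent_action = policy.act(state)
            acceptable = (agent_action == human_action)
            feedback.append((state, agent_action, acceptable))
        return feedback  # noisy labels, to be aggregated probabilistically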

Ramya Ramakrishnan, an author of the report, says:

“The model helps autonomous systems better know what they don’t know.

Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents.

The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

For example, if an emergency vehicle is approaching, a human driver should know to let it pass when safe to do so. These situations can get complex depending on the surroundings.

On a country road, allowing the vehicle to pass could mean edging onto the grass. The last thing you, or the emergency services, want a driverless car to do is to handle all country roads the same and swerve off a cliff edge.

Humans can either ‘demonstrate’ the correct approach in the real world, or ‘correct’ by sitting at the wheel and taking over when the car acts incorrectly. A list of situations is then compiled, along with labels indicating whether the car’s actions were deemed acceptable or unacceptable.

The researchers have also ensured the driverless car’s AI does not treat an action as 100 percent safe merely because its outcomes have been safe so far. Using the Dawid-Skene machine learning algorithm, the AI applies probability calculations to spot patterns and determine whether something is truly safe or still leaves potential for error.
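
The full Dawid-Skene algorithm also estimates how reliable each label source is, via expectation-maximization; the simplified sketch below illustrates only the narrower idea described above, that a situation is never scored as perfectly safe just because no bad outcome has been observed yet.

    # Simplified aggregation sketch (not full Dawid-Skene, which also
    # learns per-labeller reliability with EM).
    import numpy as np

    def blind_spot_probability(labels):
        """Estimate the chance a situation is unsafe from noisy labels.

        labels: 1 = 'unacceptable', 0 = 'acceptable' for one situation.
        Laplace smoothing keeps the estimate strictly between 0 and 1,
        so no situation is ever treated as 100 percent safe.
        """
        labels = np.asarray(labels, dtype=float)
        return (labels.sum() + 1.0) / (labels.size + 2.0)

    print(blind_spot_probability([0, 0, 0, 0]))  # ~0.17, not 0.0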

We’re yet to reach a point where the technology is ready for deployment; thus far, the scientists have only tested it with video games. It offers a lot of promise, however, for helping to ensure driverless car AIs can one day respond safely to all situations.

Google’s StarCraft II victory shows AI improves via diversity, invention, not reflexes

What determines how well machines do against humans in competitive situations may not be the thing you’d typically expect, such as response time, but rather the ability to maximize good choices through long experience.

That’s one of the takeaways from the Dec. 19 match-up in the real-time strategy computer game StarCraft II between AlphaStar, a computer program developed by Google, and a human champion, Poland’s Grzegorz Komincz, known by his gamer handle MaNa.

A blog post by the AlphaStar team on Thursday reveals some fascinating insights into how that December triumph was achieved. (A research paper is in the works.)

AlphaStar came back from many losses in 2017 to roundly trounce MaNa by five games to zero in the December match. “The first system to beat a top [human] pro,” as AlphaStar’s creators tweeted on Thursday.

Screen capture of AlphaStar playing against Team Liquid’s MaNa.

(Image: Google DeepMind/Blizzard Entertainment)

The critical difference may be a strategy of training AlphaStar that employed new “meta-game” techniques for cultivating a master player.

The machine is not faster than humans at taking actions. In fact, its average number of actions in StarCraft II is 280 per minute, “significantly lower than the professional [human] players.”

Instead, its strength seems to be coming up with novel strategies or unusual twists on existing strategies by amassing knowledge over many games. Google’s DeepMind team used a novel “meta-game” approach to train their network, building up a league of players over thousands and thousands of simultaneous training matches, and then selecting the optimal player from the results of each.
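
DeepMind’s actual league machinery is far more elaborate, but the overall shape of that meta-game can be sketched in a few lines; every callable below (clone, mutate, play_match) is a hypothetical stand-in for the real components.

    # Toy sketch of league-style training: agents play one another, and
    # new competitors branch off winners with perturbed objectives.
    import random

    def train_league(initial_agent, clone, mutate, play_match, rounds=1000):
        league = [initial_agent, mutate(clone(initial_agent))]
        for _ in range(rounds):
            a, b = random.sample(league, 2)     # pick two league members
            winner = play_match(a, b)           # returns the stronger agent
            # Branching keeps the space of explored strategies expanding.
            league.append(mutate(clone(winner)))
        return league  # candidates for the final Nash-averaging step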

StarCraft II, the latest in the StarCraft franchise from Santa Monica-based video game maker Activision Blizzard, requires players to marshal workers who move through two-dimensional terrain, gathering resources such as minerals, constructing buildings, and assembling armies to achieve dominance over other players. The franchise first came out in 1998 and has been a tournament game ever since.

It’s been a hotbed of AI innovation because Google and others see in the game several factors that make it much more challenging than other video games and classic strategy games such as chess or Go. These include StarCraft’s “fog of war” aspect: each player, including the AI “agents” being developed, has limited information, because they cannot see parts of the terrain where their opponents may have made progress.

In 2017, when Google’s DeepMind unit and programmers at Blizzard published their initial work, they wrote that they were able to get their algorithms to play the game “close to expert human play,” but that they couldn’t even teach it to beat the built-in AI that ships with StarCraft.

A screen capture showing how the AlphaStar model reflects on the game: which parts of the neural network are firing at a given moment, and which strategies it is considering.

(Image: Google DeepMind/Blizzard Entertainment)

The team licked their wounds and came back with several innovations this time around. A paper is due to be published soon, according to DeepMind co-founder and CEO Demis Hassabis.

At its core, AlphaStar, like the 2017 version, is still based on a deep learning approach built from what are known as recurrent neural networks, or RNNs, which maintain a sort of memory of previous inputs, allowing them to build upon knowledge amassed over the course of training.

The authors, however, augmented the typical “long short-term memory,” or LSTM, neural network with something called a “transformer,” developed by Google’s Ashish Vaswani and colleagues in 2017, which can move a “read head” over different parts of the network’s stored state to retrieve prior data selectively. There are a whole bunch of new additions like this.
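
The mechanism behind such a “read head” is the scaled dot-product attention of Vaswani and colleagues’ transformer; a minimal numpy sketch of a single read, detached from AlphaStar’s actual architecture, looks like this:

    # Minimal scaled dot-product attention "read head" over stored state.
    # A sketch of the mechanism only, not AlphaStar's architecture.
    import numpy as np

    def attention_read(query, memory):
        """memory: (n, d) array of stored states; query: (d,) vector."""
        d = memory.shape[1]
        scores = memory @ query / np.sqrt(d)  # similarity per memory slot
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()              # softmax over the n slots
        return weights @ memory               # selectively blended read-out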

But one of the most provocative changes to the game plan is the incorporation of an approach to culling the best players, called “Nash averaging,” introduced last year by David Balduzzi and colleagues at DeepMind. The authors observed that neural networks have a lot of “redundancy,” meaning “different agents, networks, algorithms, environments and tasks that do basically the same job.” Because of that, Nash averaging is able to selectively rule out, or “ablate,” the redundancies to reveal the fundamental underlying advantages of a particular AI “agent” that plays a video game (or does any task).

A graphic of the Nash averaging process by which the ideal player is constructed. “The final AlphaStar agent consists of the components of the Nash distribution — in other words, the most effective mixture of strategies that have been discovered.”

(Image: Google DeepMind/Blizzard Entertainment)

As Balduzzi and colleagues wrote in their paper, “Nash evaluation computes a distribution on players (agents, or agents and tasks) that automatically adjusts to redundant data. It thus provides an invariant approach to measuring agent-agent and agent-environment interactions.”

Nash averaging was used to pick out the best of AlphaStar’s players over the span of many games. As the AlphaStar team write, “A continuous league was created, with the agents of the league – competitors – playing games against each other […] While some new competitors execute a strategy that is merely a refinement of a previous strategy, others discover drastically new strategies.”

But it’s not just electing one player who shines: the Nash process effectively crafts a single player that fuses the learning and insight of all the others. As the team writes, “The final AlphaStar agent consists of the components of the Nash distribution – in other words, the most effective mixture of strategies that have been discovered.”
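
For a zero-sum meta-game, that final mixture can be computed as a maximin distribution over agents with a small linear program. The sketch below uses scipy and a made-up payoff matrix; it illustrates the idea rather than reproducing DeepMind’s Nash-averaging code.

    # Sketch: maximin ("Nash") mixture over league agents in a zero-sum
    # meta-game, solved as a linear program. Illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    def nash_mixture(payoff):
        """payoff[i, j] = agent i's average score against agent j."""
        n, m = payoff.shape
        c = np.zeros(n + 1)
        c[-1] = -1.0                                    # maximise game value v
        A_ub = np.hstack([-payoff.T, np.ones((m, 1))])  # v <= (P^T x)_j, all j
        b_ub = np.zeros(m)
        A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)  # weights sum to 1
        res = linprog(c, A_ub, b_ub, A_eq, [1.0],
                      bounds=[(0, 1)] * n + [(None, None)])
        return res.x[:n]                                # mixture over agents

    # Three mutually counter-picking agents collapse to equal weights:
    print(nash_mixture(np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])))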

The key is that the training affords each AI agent in the league unique goals and objectives, so that the space of possible solutions to the game explored expands steadily. It’s a kind of survival of the fittest for video games, with the players that go up against humans benefiting from months of rapid evolution in game play.

In echoes of what happened with Go, where DeepMind’s AlphaGo was able to invent totally novel strategies, champ MaNa is quoted as saying, “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected.”

It will be interesting to see, when the paper comes out, whether this mash-up of various machine learning techniques produces dividends in other fields of research, as Hassabis and colleagues promise. As they write in the post, “We believe that this advanced model will help with many other challenges in machine learning research that involve long-term sequence modelling and large output spaces such as translation, language modelling and visual representations.”

3 Top Driverless Car Stocks to Watch in January

Discussion of driverless cars is becoming more commonplace every day. And though the industry still faces plenty of hurdles — from both legislative and safety perspectives — before it can truly displace today’s human-driven model, early investors stand to be handsomely rewarded from the trend.

But finding the most promising self-driving-vehicle stocks is easier said than done. So we asked three top Motley Fool contributors to each choose a driverless-car stock they believe you should be watching at the start of 2019. Here’s why they chose Alphabet (NASDAQ:GOOG) (NASDAQ:GOOGL), NXP Semiconductors (NASDAQ:NXPI), and General Motors (NYSE:GM).

A driverless Jaguar with the Waymo logo

IMAGE SOURCE: WAYMO.

Alphabet’s Waymo is quickly going mainstream

Steve Symington (Alphabet): Earlier this week, Waymo, the autonomous-vehicle subsidiary of Google parent Alphabet, announced plans to retrofit a 200,000-square-foot facility in Michigan, effectively creating “the world’s first factory 100% dedicated to mass production of L4 [level 4] autonomous vehicles.”

Of course, Michigan workers can be happy the plant will create up to 400 jobs in the area, from engineers to operations personnel to fleet coordinators. But most exciting for investors is that this appears to be a sign Waymo is taking more control over its production capabilities, as it works to commercialize its business.

The news comes hot on the heels of Waymo launching its first self-driving taxi service in Arizona last month. Assuming it can continue to scale its business and hone its self-driving-vehicle technology, some analysts already believe Waymo is poised to become a $100 billion business over the next decade — a hefty chunk of incremental change even for Alphabet, given that its market cap is currently around $750 billion. So even putting aside the incredible business Alphabet has already built through its core Google operations, I think investors who bet on the company as a leader in the driverless-car space could enjoy massive gains in the coming years.

Don’t call it a comeback — NXP never left

Anders Bylund (NXP Semiconductors): Automotive computing giant NXP Semiconductors is navigating some stormy seas at the moment. The proposed merger with larger chipmaking peer Qualcomm (NASDAQ:QCOM) fell apart, and car sales in China have been slow for a few months. You could argue that these two headwinds are related, since international trade disputes arguably triggered both of them.

Investors sent NXP’s stock straight to the bargain bin when the Qualcomm deal failed, and that hole has only been dug deeper over the last six months.

But I think that’s a big mistake. NXP isn’t going away, and is probably worth every penny of the $44 billion that Qualcomm had been prepared to pay for it. Trading at a current market cap of $23 billion, with $25 billion of enterprise value, it’s a steal in my book.

Despite political headwinds and unpredictable currency exchange trends, NXP is delivering modest revenue growth these days. According to a Strategy Analytics report quoted in NXP’s recent investor-day presentation, the company is neck and neck with Japanese rival Renesas in the automotive-chip market; together, these two companies hold a 59% share of that market. And car processors accounted for 40% of NXP’s total sales in the third quarter.

Most of that revenue may come from infotainment systems and engine controllers right now, but that will change quickly. Here’s how NXP’s CEO Rick Clemmer explained this at an industry conference in November:

If you look at automotive, our position, we’re really focused on autonomous driving. We’re designed in on all of the 10 top auto companies’ radar platform[s] that will be deployed. We look at autonomous driving through Level 3 as being the significant growth driver over the next five to seven years.

A fully autonomous car needs at least $900 of computing components aboard, in order to read the traffic conditions and use them to deliver a safe and efficient self-driving experience. Level 3 self-driving cars can get away with $600 of these sensors and processors, and current Level 1 or 2 assisted-driving vehicles don’t need more than $150 of computing tools. If self-driving cars are going mainstream in the near future and NXP can hold on to its massive market share, we’re looking at a huge long-term growth opportunity.
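
Those three price points are the investment case in miniature. A back-of-envelope sketch of what they imply per million vehicles follows; the volume figure is a made-up placeholder, not an NXP forecast.

    # Back-of-envelope sketch of the per-car computing content quoted above.
    # The unit volume is a hypothetical placeholder, not a forecast.
    content_per_car = {
        "Level 1-2 (assisted)": 150,   # dollars of chips/sensors per car
        "Level 3 (conditional)": 600,
        "Level 4-5 (full)": 900,
    }

    units = 1_000_000  # hypothetical annual vehicle volume
    for level, dollars in content_per_car.items():
        print(f"{level}: ${dollars * units / 1e6:,.0f}M per million cars")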

Like I said, NXP’s stock is incredibly cheap right now, and that’s a big mistake. I’d recommend taking advantage of NXP’s low share prices, because the sale won’t last forever.

An oldie but a goodie

Chris Neiger (General Motors): Technology companies get most of the attention for what they’re doing in this autonomous-vehicle (AV) market, but investors would be wise not to overlook GM’s plans for self-driving cars.

The automotive stalwart began its AV push in 2016 when GM purchased Cruise Automation, a company that makes self-driving-vehicle software. Since then, Cruise has grown from a few dozen employees to more than 1,000. Cruise Automation says it’ll start mass-producing a fully self-driving vehicle — sans steering wheel and pedals — sometime this year. Additionally, GM is working on an AV ride-hailing service that’s expected to launch later this year.

But GM isn’t going it alone in the AV market. Late last year the company forged a new partnership with Honda (NYSE:HMC) to codevelop an entirely new self-driving car from the ground up. Honda said it will invest $2.75 billion in GM’s Cruise over the next 12 years. GM has also received funding for its AV ambitions from Softbank Group, which invested $2.25 billion in Cruise last year.

Aside from the investments, GM also signaled that AVs will play a more significant role in the company when it announced restructuring and layoffs at the end of 2018, saying that among the changes it would allocate more resources to electric and autonomous vehicles.

GM’s Cruise was valued at about $14.6 billion after Honda’s investment just a few months ago. And with GM poised to launch an AV service this year and the company already working on a fully self-driving vehicle, GM’s Cruise is well-positioned to grow its value even more and expand its lead in the nascent self-driving-vehicle market.

Strap in and enjoy the ride

We’re still in the early stages for driverless cars, so none of us can guarantee that these three stocks will go on to beat the market. But given their central roles and industry leadership today, we believe Alphabet, NXP, and GM are on the road to doing exactly that. And we think patient investors who put their money to work accordingly will be more than pleased with the decision.
