Democrats dominate artificial intelligence commission

A new federal commission on artificial intelligence is being led by Democrats.

The National Security Commission on Artificial Intelligence (AI), a federally appointed commission, held its first meeting Monday chaired by former Google executive and billionaire Eric Schmidt, a major donor and informal adviser to former President Barack Obama.

The commission’s vice chairman is Robert Work, who was deputy defense secretary in the Obama administration.

Additionally, the AI commission has hired as a staff member Ylli Bajraktari, a former National Security Council staff member under Mr. Obama and a former aide to Obama Defense Secretary Ash Carter. He and his brother, Ylber Bajraktari, drew questions about political reliability from some conservatives, who asked why President Trump’s second national security adviser, retired Lt. Gen. H.R. McMaster, kept them in senior positions at the NSC.

A Pentagon spokeswoman had no immediate comment.

Mr. Trump’s new defense budget is seeking $927 million for the Joint Artificial Intelligence Center and an advanced image recognition capability. The Pentagon said in a statement that the commission was set up under last year’s defense authorization act.

Its mission is to review and advise the federal government on artificial intelligence, machine learning and other associated technologies and issues related to national security, defense, public-private partnerships and investments.

“I’m honored to lead this talented group of commissioners as we take on this important effort,” said Mr. Schmidt. “We have a tremendous opportunity to help our government understand the state of artificial intelligence and offer ideas on how to harness this transformative technology to benefit both our economic and national security interests.”

The 15 members of the commission were briefed during the Monday meeting on AI efforts by the Pentagon, the Commerce Department, the intelligence community and members of Congress.

The statement said commissioners were appointed by the secretaries of defense and commerce and Republicans and Democrats in Congress.

Mr. Schmidt is listed as a technical adviser to Alphabet, Google’s parent company.

Google came under fire from Vice President Mike Pence in October for the company’s work with China in developing a censored search engine for the Chinese government that would allow blocking search terms such as “Tiananmen” — the Beijing square where hundreds of pro-democracy protesters were massacred by Chinese troops in 1989.

Google executives later said the search engine, called Dragonfly, was a research project and would not be sold to the Chinese government.

However, insiders from Google recently reported that work on the Chinese censorship software was continuing.

BILLIONS FOR HYPERSONIC MISSILES

The Trump administration’s defense budget for fiscal 2020 includes a request for $2.6 billion for developing hypersonic missiles.

The money would be spent on developing maneuvering weapons capable of traveling faster than 7,000 miles per hour that could be used to defeat the increasingly sophisticated missile defenses of Russia, China and other states. Air Force Maj. Gen. John M. Pletcher, deputy assistant Air Force secretary for budget, said the funds would be used to speed up development of a U.S. hypersonic missile.

“The FY ‘20 budget continues the funding to accelerate hypersonics development, keeping us on a path to build and fly our nation’s first hypersonic boost glide weapon five years earlier than anticipated,” he said.

Boost glide vehicles are launched on ballistic missiles and then glide and maneuver to targets at altitudes just below space.

Another Air Force official, Carolyn M. Gleason, said the service is investing $576 million for hypersonic weapons, including an air-launched rapid response weapon and a hypersonic conventional strike weapon. Both are slated to be operational in late 2022.

The hypersonic missile will be used for the Pentagon’s conventional prompt global strike capability, which can attack any target on Earth in less than 30 minutes.

Ballistic missiles normally reach targets in 30 minutes. Hypersonic missiles, because of their incredible speed, can hit targets in 15 minutes or less, depending on the launch location.

The Pentagon plans to use hypersonic missiles armed with conventional warheads in response to a Chinese or Russian attack on U.S. satellites, or to blow up nuclear materials of a rogue state like North Korea or Iran. The missiles also could strike terrorist groups that acquire nuclear or other weapons of mass destruction.

Another potential use for hypersonic rapid strikes is hitting a gathering of terrorist leaders in a neutral country.

AIR FORCE ON PLA SPACE WAR

China is building up military forces for space warfare and has set up a special branch of the military for fighting in space, according to an Air Force study.

The space corps within the PLA Strategic Support Force is a significant part of China’s growing advanced military capabilities.

“Although espousing a policy of the peaceful use of outer space, China nevertheless is actively developing a diverse set of military capabilities in this domain,” said the 2017 report by the Air Force’s China Aerospace Studies Institute. The institute is part of the Air University in Alabama.

Chinese space arms would be used in a conflict to “disrupt or cripple the ability of adversary forces to use assets in space,” the report notes.

The support force, created in December 2015, is in charge of China’s four launch centers.

According to the report, China’s space warfare capabilities include several types of weaponry.

“In an outer space context, this capability, broadly known as counter-space, spans a vast range of both kinetic and non-kinetic capabilities,” the report said.

Kinetic operations destroy satellites and create debris, while non-kinetic strikes temporarily disable or blind satellites.

China’s 2007 anti-satellite missile test used a converted medium-range ballistic missile to blast a weather satellite, creating the largest man-made space debris field in history, with more than 3,400 pieces of floating metal, the report said.

Other tests took place in 2013, including an ASAT missile launch into nearly geosynchronous orbit. The report said the test demonstrated China’s ability to “threaten U.S. Global Positioning System (GPS) and other types of satellites.”

The PLA also is building co-orbital attack systems using spacecraft that can move within proximity of a space target. “For example, a Chinese spacecraft could ram into an enemy satellite or detonate near it,” the report said.

“China is also interested in operationalizing robotic arm technology, possibly to ‘grapple’ opposing platforms in order to disable them without creating debris — a capability the PLA apparently tested in August 2013.

“Co-orbital spacecraft can also engage in non-kinetic ‘blinding’ operations,” the report said. “For example, these spacecraft could employ ‘umbrellas’ or ‘spray paint’ to block the view of an adversary’s sensors.”

Electronic jamming of satellites, such as jamming GPS signals to thwart precision targeting, also could be employed. The Air Force believes China will seek even more advanced space weapons in the future.

The first step is to deploy sophisticated space-based sensors to support leaders and war planners, such as Beidou, China’s GPS-clone satellite navigation system.

“Moreover, China is attempting to secure its satellite communications by investing in so-called ‘quantum communications’ — currently considered unbreakable encryption by modern standards,” the report said.

“China’s counter-space capabilities will undoubtedly become increasingly advanced as well, particularly in the area of direct ascent [kinetic kill vehicles],” the report said.

Future space warfighting could involve deploying manned combat spacecraft and space-based weapons that can hit targets in the air, at sea or on the ground.

The PLA “continues to develop rapidly across all aspects — hardware, technology, personnel, organization, etc.,” said Brendan S. Mulvaney, director of the Institute. “The PLA’s aerospace forces are, in many ways, leading that change.”

Contact Bill Gertz on Twitter at @BillGertz.

Copyright © 2019 The Washington Times, LLC.

Adams receives first Simplr Artificial Intelligence and Technology Scholarship

WYOMISSING, Pa. — Ethan Adams, a Penn State Berks junior with a double major in information sciences and technology, and security and risk analysis, is the first recipient of the Simplr Artificial Intelligence Technology Scholarship. He received a $5,000 scholarship, which he plans to use for his tuition and research, as well as in preparation for graduate school.

Adams was encouraged to apply for the scholarship by his program coordinator, Tricia Clark. He also credits his honors adviser, Sandy Feinstein, with helping him to refine his winning essay describing how certain elements of artificial intelligence will play an important role in the future.

Simplr provides e-commerce companies with U.S.-based customer service that it bills as scalable and affordable. Simplr was incubated and funded by Asurion, which describes itself as a global leader in customer service.

The Simplr Artificial Intelligence Technology Scholarship website states, “Ethan’s application, vision and demonstrated aptitude for AI and related technologies made him the clear favorite out of a strong field of competitors.”

A Penn State Schreyer Honors Scholar and vice president of the Penn State Berks Technology Club, Adams is very engaged on campus. As part of his role in the Technology Club, he was instrumental in bringing Benyah Shaparenko, product manager for speech at Google, to campus to give a presentation. Adams also works as an intern for a Penn State Berks student startup company called Traduki, which provides virtual real-time translation services.

Adams said his ultimate goal is to attend graduate school to broaden his education in machine learning and then work in research and development at Google’s DeepMind.

When asked how his time at the college has prepared him for the future, Adams said that learning to communicate was the most valuable skill he has acquired.

“Since I came to Penn State Berks, I’ve learned how to write and communicate effectively. Communication applies to everything that you do. When you can communicate well, it opens many doors for you,” he said.

Adams also stated that he would encourage other students to apply for these types of scholarships.

Oldtimers Dell and Intel show service mesh newbie Tetrate round the enterprise

Service mesh

Since some of the founders are known for their work on core open source projects such as gRPC and Istio, and the company has some of the main Envoy maintainers on board, Dell Technologies Capital, 8VC, Intel Capital, Rain Capital, and Samsung NEXT had no compunction about putting in $12.5 million in funding to get Tetrate off the ground.

Reading those names may feel a bit odd in this context, but Dell, for example, owns VMware, which bought Heptio last year and therefore has a stake in the container world. Heptio products such as Contour also depend on Envoy.

But those aren’t the only big names interested in Tetrate: right off the bat the company is collaborating with Google on operating hybrid cloud environments with Istio, and working with the Cloud Native Computing Foundation. Both are also central to Service Mesh Day, a conference focused on all things service mesh that Tetrate’s team is hosting at the end of March 2019 in San Francisco.

Product-wise, Tetrate starts off with GetEnvoy and the Tetrate Istio Cloud Map Operator, both available on a sign-up basis with little additional information yet. GetEnvoy is a certified build of the Envoy proxy meant to ease adoption of the project, while the Istio Cloud Map Operator lets services running on Kubernetes clusters communicate with those registered in AWS Cloud Map.

This all plays well to the company’s aim of getting Istio working on a variety of platforms, so that the service mesh can be used on virtual machines, containers, and bare metal offerings and give businesses a leg-up with security, availability and reliability matters.

According to Tetrate founders Varun Talwar and Jeyappragash Jeyakeerthi (who previously worked at Google and Twitter, respectively), the latter should be “built in to the networking layer”, which is why “Tetrate uses the best in class, battle tested Envoy as dataplane and Istio, which bakes in a lot of learnings from Google’s internal infrastructure to provide a highly reliable and highly available control plane.” The approach they take is service-centric, since IP-based networking might not always do the trick anymore, especially when it comes to securing individual containers.

On top of that, the company is “working with the standard Istio model of config distribution that will allow enterprises to reliably scale out as their workload increases”. And since tools alone don’t make an enterprise-grade offering, “Slack support from trusted maintainers of the project” will also be included in Tetrate’s support portfolio.

Determined AI nabs $11M Series A to democratize AI development

Deep learning involves a highly iterative process in which data scientists build models and test them on GPU-powered systems until they get something they can work with. It can be expensive and time-consuming, often taking weeks to fashion the right model. Determined AI, a new startup, wants to change that by making the process faster, cheaper and more efficient. It emerged from stealth today with $11 million in Series A funding.
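The iterative search described here can be sketched as a toy grid search, where each trial stands in for an expensive GPU training run. The scoring function and hyperparameter grid below are purely illustrative, not Determined AI’s actual method:

```python
import itertools

def train_and_score(lr, batch_size):
    # Stand-in for an expensive GPU training run: a toy score that
    # peaks at lr=0.001 and batch_size=64.
    return 1.0 / (1.0 + lr * 100 + abs(batch_size - 64) / 64)

def grid_search(grid):
    # Try every hyperparameter combination and keep the best scorer.
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_score(**cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_score, best_cfg

score, cfg = grid_search({"lr": [0.1, 0.01, 0.001],
                          "batch_size": [32, 64, 128]})
print(cfg)  # {'lr': 0.001, 'batch_size': 64}
```

In practice each trial takes hours on a GPU, which is why scheduling trials across shared cluster resources, as Determined AI aims to do, matters so much.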

The round was led by GV (formerly Google Ventures) with help from Amplify Partners, Haystack and SV Angel. The company also announced an earlier $2.6 million seed round from 2017 for a total $13.6 million raised to date.

Evan Sparks, co-founder and CEO at Determined AI says that up until now, only the largest companies like Facebook, Google, Apple and Microsoft could set up the infrastructure and systems to produce sophisticated AI like self-driving cars and voice recognition technologies. “Our view is that a big reason why [these big companies] can do that is that they all have internal software infrastructure that enables their teams of machine learning engineers and data scientists to be effective and produce applications quickly,” Sparks told TechCrunch.

Determined’s idea is to create software to handle everything from managing cluster compute resources to automating workflows, thereby putting some of that big-company technology in reach of any organization. “What we exist to do is to build that software for everyone else,” he said. The target market is Fortune 500 and Global 2000 companies.

The company’s solution is based on research conducted over the last several years at AmpLab at the University of California, Berkeley (which is probably best known for developing Apache Spark). It used the knowledge generated in the lab to build sophisticated solutions that help make better use of a customer’s GPU resources.

“We are offering kind of a base layer that is scheduling and resource sharing for these highly expensive resources, and then on top of that we’ve layered some services around workflow automation,” Sparks said. He said the team has generated state-of-the-art results that are somewhere between five and 50 times faster than those from tools available to most companies today.

For now, the startup is trying to help customers move away from generic kinds of solutions currently available to more customized approaches using Determined AI tools to help speed up the AI production process. The money from today’s round should help fuel growth, add engineers and continue building the solution.

Streamlio, an open-core streaming data fabric for the cloud era

Brand new, you’re retro.

This aphorism from a Tricky song came to mind once more a couple of years back, when Streamlio came out of stealth. Streamlio is an offering for real-time data processing based on a number of Apache open source projects, and it competes directly with Confluent, whose offering is built around Apache Kafka. What’s the point in doing that?

In 2017, Apache Kafka was generally considered an early adopter thing: Present in many whiteboard architecture diagrams, but not necessarily widely adopted in production in enterprises. Since then, Kafka has laid a claim to enterprise adoption, and Confluent has acquired open-core unicorn status after its latest funding. This does not make things easier for the competition, obviously.

The question remains then: Why would anybody do this, and how could it work? Streamlio’s answer to the why part seems to be that, despite being new for some, Kafka is retro. As to the how: Any offering seeking to position itself as a Kafka alternative would have to be substantially faster/more reliable, while also being compatible with Kafka and offering the options that Kafka offers.

Now, Streamlio is announcing a managed cloud service, bringing it closer to its vision. ZDNet discussed the vision and its execution with Karthik Ramasamy and Jon Bock, Streamlio’s CEO and co-founder and VP of marketing, respectively.

Real time analytics

Ramasamy’s bio includes over two decades of experience in real-time data processing, parallel databases, big data infrastructure, and networking. He was engineering manager and technical lead for real-time analytics at Twitter, where he co-created the Apache Heron real-time engine.

Ramasamy’s co-founders are Matteo Merli, an ex-Yahoo architect who is lead developer for Apache Pulsar and a PMC member of Apache BookKeeper, and Sanjeev Kulkarni, also a former Twitter technical lead for real-time analytics and a co-creator of Twitter Heron.

The team certainly does not lack enterprise experience, and this is part of Streamlio’s message. That also explains why Streamlio managed to secure Series A funding of $7.5 million led by Lightspeed, which, as Ramasamy noted, has also been involved in other open-core companies.

Ramasamy noted that Streamlio’s headcount is below 100 people at this point. He also pointed out, however, that Apache Pulsar, which is at the core of Streamlio, has over 100 contributors and 3,000 stars on GitHub. The other two Apache projects on which Streamlio is based are Heron and BookKeeper.

Pulsar is the upper layer for Streamlio, and offers an API which is Kafka-compatible — although there are nuances to this. There are architectural differences with Kafka, which as per the Streamlio team can be boiled down to the fact that Streamlio has a decoupled layer architecture. What we see as being at the core of this, especially when talking about running Streamlio in the cloud, is BookKeeper.

Book keeping and multi-temperature storage in the cloud

BookKeeper is the storage layer for Streamlio. It was designed to support a form of multi-temperature storage management: hot data (recent or frequently accessed) is kept on faster storage media, while cold data is offloaded to slower secondary storage.
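The hot/cold split can be illustrated with a minimal sketch. All names here are invented for illustration; BookKeeper’s real offloading operates on ledgers and segments, not individual keys:

```python
import time

class TieredStore:
    """Toy multi-temperature store: recent entries stay in a fast
    in-memory dict; entries older than max_age_s are offloaded to a
    slow tier (standing in for S3 or other secondary storage)."""

    def __init__(self, max_age_s):
        self.max_age_s = max_age_s
        self.hot = {}   # key -> (timestamp, value), the fast tier
        self.cold = {}  # stand-in for slower secondary storage

    def put(self, key, value, now=None):
        self.hot[key] = (now if now is not None else time.time(), value)

    def offload(self, now=None):
        # Move entries older than max_age_s from hot to cold storage.
        now = now if now is not None else time.time()
        stale = [k for k, (ts, _) in self.hot.items()
                 if now - ts > self.max_age_s]
        for key in stale:
            _, value = self.hot.pop(key)
            self.cold[key] = value

    def get(self, key):
        if key in self.hot:
            return self.hot[key][1]
        return self.cold[key]  # slower path

store = TieredStore(max_age_s=60)
store.put("a", 1, now=0)
store.put("b", 2, now=100)
store.offload(now=100)
print(sorted(store.hot), sorted(store.cold))  # ['b'] ['a']
```

Reads are transparent to callers either way; only the latency differs, which is the point of the design.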

What makes this particularly relevant for Streamlio’s cloud managed version on AWS is the fact that BookKeeper supports S3, AWS’s storage layer. Streamlio’s executives emphasized that other streaming platforms such as Kafka, Flink, or Spark do not have this capability built-in.

Apache Pulsar tiered storage, with offloading capabilities.

Kafka storage is centered around an append-only log abstraction, similar to BookKeeper. Flink uses RocksDB as a persistence layer, and Spark uses Parquet. While all of these can be configured to work with S3 in one way or another, Streamlio claims BookKeeper is faster and easier to use, without requiring special configuration and tuning.
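The append-only log abstraction shared by Kafka and BookKeeper boils down to a few lines. This is a toy sketch, not either project’s actual storage format:

```python
class AppendOnlyLog:
    """Minimal append-only log: records are only ever appended, and
    readers address them by monotonically increasing offset."""

    def __init__(self):
        self._records = []

    def append(self, record):
        # Records are immutable once written; return the new offset.
        self._records.append(record)
        return len(self._records) - 1

    def read(self, offset, max_records=10):
        # Readers pull a batch starting at any retained offset.
        return self._records[offset:offset + max_records]

log = AppendOnlyLog()
for msg in ("temp=20", "temp=21", "temp=19"):
    log.append(msg)
print(log.read(1))  # ['temp=21', 'temp=19']
```

Because writes only ever go to the tail, old segments of such a log are immutable, which is exactly what makes offloading them to cheap object storage such as S3 straightforward.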

BookKeeper is also used by Pravega, and since it seems to be a differentiation point for Streamlio, we wondered how feasible it would be for others to adopt and integrate BookKeeper as well. Ramasamy pointed out that this would require extensive redesign, and the fact that Streamlio offers an integrated stack on top of BookKeeper is part of its value-add proposition.

As is often the case with upstarts claiming superior performance, Streamlio published a benchmark, according to which Streamlio shows up to 150 percent improvement over Kafka in terms of throughput, while maintaining up to 60 percent lower latency. Streamlio’s pricing for its AWS managed version is based on throughput, although it was noted that AWS pricing based on instance capabilities also applies.

Zookeeper and SQL in the cloud

Streamlio also uses Apache Zookeeper, which is considered legacy and a single point of failure, typically used to manage Hadoop clusters on-premise. Using Zookeeper in AWS did not seem to make much sense to us, so we wondered what the rationale was. Ramasamy said that Zookeeper is not used to manage Streamlio, only to serve metadata. He went on to add that Zookeeper is “invisible,” and Streamlio’s cloud service is container-based.

Streamlio also features a number of other interesting architectural choices, including its support for serverless functions, and SQL. The latter is implemented using Presto, the SQL engine open-sourced by Facebook. This, in turn, has some interesting implications.

On the one hand, it means Streamlio benefits from the fact that Presto was designed to support standard ANSI SQL semantics, and it can be used to integrate other sources as well. So, via Presto, Streamlio users can do things such as join data in Streamlio with external tables and use BI tools on top of Presto. On the other hand, this design means that queries are not really done on the incoming streaming data in real time.
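A query of the kind described, joining stream data with an external table across Presto catalogs, might look like the following. Every catalog, table, and column name here is hypothetical, invented for illustration rather than taken from Streamlio’s documentation:

```python
# Hypothetical Presto SQL: join streaming click events (exposed through
# a Pulsar connector catalog) with a users table in a PostgreSQL catalog.
query = """
SELECT   e.user_id, u.region, count(*) AS clicks
FROM     pulsar.events.clicks AS e
JOIN     postgresql.public.users AS u
  ON     e.user_id = u.id
GROUP BY e.user_id, u.region
"""
```

Presto resolves each catalog through its own connector, which is what makes this kind of cross-source join possible without moving the data first.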

Streamlio’s architecture.

When discussing this, Ramasamy said that this was a conscious choice, and it has to do with the overall vision for Streamlio. For Ramasamy, streaming platforms are not meant to replace databases. What he sees as the end goal, however, goes beyond being able to ingest data and dispatch it to the right recipients. Be it via pub-sub messaging or queuing, Streamlio wants to enable its users to run quick analytics over incoming data.

For more in-depth analysis, however, Ramasamy would rather defer to offerings specifically designed for this. What he sees as the role of Streamlio is to act as the data fabric to facilitate data movement, wherever that data may originate from, or be directed to: The edge, the cloud, or the datacenter.

Streamlio’s positioning and strategy

That seems like a well-directed vision for Streamlio. The cloud is here to stay, but on-premise data centers are not going away either, and applications on the edge also need to communicate their data. The million-dollar question is: Why pick Streamlio over a number of alternatives? All data streaming platforms want to play this role, and each of them has some things going for it.

Streamlio, as opposed to Kafka, Spark or Flink, does look like an early adopter thing at this point. Although there really seem to be technical benefits to Streamlio’s architecture, the reality is the competition is ahead in terms of maturity, adoption, funding, and mindshare. But that’s not to say Streamlio is a lost cause, or that nobody is using it — far from it.

Besides being used in production at Yahoo and Twitter, Streamlio has adopters such as Zhaopin (a Monster.com company in China) and STICorp to show for it. STICorp actually used Streamlio to replace Kafka, although it’s worth noting that Ramasamy pointed out Streamlio is not a drop-in replacement for Kafka.

A data fabric is a metaphor used to denote a layer weaving data from disparate sources together.

(Image: Fancycrave on Unsplash)

There is API compatibility, but the way it works is by passing code utilizing Kafka API calls through a tool which replaces them with corresponding Streamlio API calls. Ramasamy noted that this guarantees functional equivalence, but it does not mean there is 100 percent correspondence between Kafka and Streamlio APIs, as they reflect different underlying models. Streamlio also noted that there is a prototype integration with Apache Beam, which they will develop further if there is sufficient customer interest.
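The call-rewriting approach described can be sketched as a simple source-to-source pass. The mapping table below is invented for illustration; the article gives no detail on the actual tool or the target API names:

```python
import re

# Toy source-translation pass: rewrite Kafka client calls to a
# hypothetical Pulsar-style client. The patterns and replacements are
# illustrative only.
API_MAP = {
    r"\bKafkaProducer\b": "PulsarProducer",
    r"\bKafkaConsumer\b": "PulsarConsumer",
    r"\.send\(": ".sendAsync(",
}

def translate(source):
    # Apply each rewrite rule in turn over the source text.
    for pattern, replacement in API_MAP.items():
        source = re.sub(pattern, replacement, source)
    return source

code = "producer = KafkaProducer(); producer.send(topic, msg)"
print(translate(code))
# producer = PulsarProducer(); producer.sendAsync(topic, msg)
```

A textual pass like this can guarantee the rewritten program calls equivalent operations, which matches the "functional equivalence, not one-to-one API correspondence" caveat Ramasamy makes.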

A broader point to make here, drawing on the comparison between Confluent and Streamlio, is that of doing open source business, especially in light of AWS’s fork of Elasticsearch, the latest episode in an ongoing escalation between open source enterprise vendors and AWS. If Streamlio is as successful as others in the market, would it not be yet another target for AWS appropriation? How would it respond to that?

Ramasamy thinks 2019 will mark the decline of open source support as a business model, and the rapid rise of open-source SaaS as a growth market and key business model for open source overall. He predicts we’ll see vendors seeking to compete and differentiate on their ability to provide the best possible software-as-a-service — but leveraging open source technology instead of a proprietary offering:

“We’ll see [vendors] work to provide value-added flexibility, elasticity and performance specific to cloud and SaaS environments in order to deliver what customers increasingly see as the most important value-add: Ensuring that customers can focus on building their applications, and spend less time on care and feeding of the underlying technology that those applications use.”

That seems to be reflected in Streamlio’s strategy, too. Take open-source components, integrate them, extend them, and build a commercial offering on top of it. Whether that is the end-all in open source is a different discussion. But it is what Streamlio is betting on.
