Bitcoin, Ethereum are most profitable investments of the decade

As the decade draws to a close, it’s time to look at the investments that were the most successful. And, unsurprisingly, cryptocurrencies top the list of the most profitable investments of the decade.

Up first is Bitcoin. The first cryptocurrency, built by a pseudonymous programmer known as Satoshi Nakamoto, it led to the creation of many Bitcoin forks (alternative versions running on similar code) and thousands of altcoins, either using the same code or trying out new features. But if you got in early, you had the chance to make a quick buck.

Since the first bitcoin became available for trading, its price has risen 62,500 percent. Outshining many traditional stocks, it even spawned an entire culture built around prices “mooning” and the promise of lovingly labelled “lambos.” Due to this extreme rise, many critics have called it a Ponzi scheme and say that its price pumps are bubbles that keep popping. But despite the criticism, an entire industry has been built around Bitcoin and other cryptocurrencies, leading many countries around the world to start adopting blockchain technology.

Much of the promise of blockchain technology can be seen with Ethereum. It offers features known as smart contracts, which allow for the creation of decentralized apps. These have interesting applications, particularly in the world of finance.

Ethereum’s price has shot up too. Even though it has dropped heavily since its all-time high in January 2018, it is still up 17,900 percent. One ETH is currently worth $132.
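
To put those percentages in perspective, here is a quick back-of-the-envelope sketch in Python. The figures are the article’s; the implied entry price is hypothetical arithmetic, only as precise as the rounded percentages.

```python
# A gain of p percent means: final = initial * (1 + p / 100).
# Rearranged, the implied entry price is: initial = final / (1 + p / 100).

def implied_entry_price(final_price: float, percent_gain: float) -> float:
    return final_price / (1 + percent_gain / 100)

# Ethereum: up 17,900 percent with one ETH at $132 (the article's figures)
# implies an entry price of roughly $0.73.
print(round(implied_entry_price(132, 17_900), 2))   # 0.73

# Bitcoin: a 62,500 percent rise turns every $1 invested into $626.
print(1 * (1 + 62_500 / 100))                       # 626.0
```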

However, some traditional stocks have not been far off. Netflix had a strong performance this decade, rising 4,280 percent. That’s not too surprising given how ubiquitous it now is. Even new films are now launching on Netflix instead of heading to the cinema. But its epic rise has led to an increase in the number of competing video streaming companies. Will it be able to fend off the competition going into 2020?

Along with the rise of Netflix, and watching TV at home in general, another company did particularly well. Domino’s Pizza saw an increase in share price of 3,000 percent. Who knew pizza and TV were a winning combination?

In line with the trend of not needing to go outside, Amazon grew considerably in the last decade, rising 1,250 percent. It’s worth noting that not only does Amazon ship products to your door but it also offers a TV streaming service. What’s next, Amazon pizza?

Those doing yoga, trying to work off the 1,000-calorie pizzas, helped boost the share price of Lululemon, a retailer known for creating activewear and clothes for “most other sweaty pursuits.” Its shares rose by 1,300 percent.

On a different track, healthcare company Abiomed saw a 2,000 percent rise in the last decade. It creates medical devices, such as artificial hearts.

Shortly behind Amazon is NVIDIA, known for creating computer chips. Interestingly, it pulled in $1.95 billion in revenue from its crypto mining business. But it wasn’t without controversy: in September, critics accused it of surreptitiously influencing the development of an upgrade to the Ethereum network, though nothing was ever proved.

Other profitable investments of the decade were payment processors, including Mastercard and Visa, up 1,100 percent and 760 percent respectively. Google shares rose by 350 percent and Apple shares went up by 840 percent.

Technology And Society: Can Marketing Save The World?

In 1991, Stuart Haber and Scott Stornetta worked to develop cryptographically secured chains of blocks, creating a database nobody could tamper with. At the time, they likely could not have imagined that this technology would become the foundation of blockchain. Blockchain was born with Satoshi Nakamoto’s 2008 paper on cryptocurrency, which revealed how many more applications this technology could have.

The foremost practical benefit of blockchain, in any application, is that it takes reliance away from humans and places it in machines. Its ultimate value is the automated trust it generates: a system of collaborating computers creates cryptographically linked blocks that guarantee the security and authenticity of any transaction or interaction, avoiding data breaches and human intermediation.
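
A minimal Python sketch of the mechanism behind that tamper-evidence (a toy, not a real blockchain: there is no consensus, mining or signing here). Each block commits to the hash of its predecessor, so altering any historical record breaks the chain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
add_block(chain, "Carol pays Dave 1")
print(is_valid(chain))                      # True

chain[1]["data"] = "Bob pays Mallory 200"   # tamper with history
print(is_valid(chain))                      # False: the next block's link breaks
```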

In the last 10 years, we’ve seen the birth of several initiatives and organizations that are attempting to make the most out of this technology. It seems we are on the verge of a revolution that will change our lives in much the same way personal computers did throughout the last 30 years.

While the value this technology may bring seems clear, we tend to forget that most technologies used today are data-driven, running on binary systems. Blockchain, artificial intelligence (AI), the internet of things (IoT), Industry 4.0, autonomous vehicles and most of the amazing achievements of the last 50 years are based on this type of computing. What would happen if these binary systems became obsolete?

Change Is The Only Constant

With the technology we have today, cracking the encryption that secures cryptocurrencies on the blockchain is no easy feat. That is what makes blockchain a safe place to authenticate transactions. But what if a new type of computer could do it in just minutes? Such a machine, the quantum computer, is already being used by companies like Google and IBM.

Suddenly, blockchain, the technology that was supposed to change the future, becomes obsolete, and with it, most of the efforts of its early adopters. Few administrations and a handful of companies are charting the road to post-quantum encryption. The U.S. is one of them. The National Institute of Standards and Technology (NIST) has already identified 26 algorithms that could become the standard for protecting information today and tomorrow.
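
One family under evaluation is hash-based signatures, whose security rests on the hash function alone rather than on the factoring and discrete-log problems a quantum computer running Shor’s algorithm would break. Here is a minimal Python sketch of the idea, the classic Lamport one-time signature; it is illustrative only, and the actual NIST candidates built on this idea are far more sophisticated:

```python
import hashlib
import secrets

def keygen():
    # Private key: 256 pairs of random secrets.
    # Public key: the SHA-256 hash of each secret.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

def sign(message: bytes, sk):
    # Reveal one secret per bit of the message digest. One-time use only:
    # signing two messages with the same key leaks enough to forge.
    return [pair[bit] for pair, bit in zip(sk, _bits(message))]

def verify(message: bytes, signature, pk) -> bool:
    return all(hashlib.sha256(s).digest() == pair[bit]
               for s, pair, bit in zip(signature, pk, _bits(message)))

sk, pk = keygen()
sig = sign(b"pay Bob 5 BTC", sk)
print(verify(b"pay Bob 5 BTC", sig, pk))   # True
print(verify(b"pay Mallory 5", sig, pk))   # False
```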

But there is no reason for panic. As Ian Kahn mentions in his acclaimed “Blockchain City” documentary, “Tomorrow is not here yet,” and it seems, as he also reminds us, that our tomorrow is made of the only constant there is: change. Through constant change, evolution is happening at an accelerating pace, giving us little time to adapt and turning governments, organizations, companies and consumers alike into forced early adopters.

While quantum computing may seem a giant leap, it is no different from all the other technologies we benefit from without realizing we are using them. As consumers, we do not understand internet protocols, and yet we buy online every day. The same will happen with quantum technology: we may not understand it, but we will still run applications that reap the benefits of this giant disruption, which will boost innovation in ways we cannot even imagine.

I believe quantum computers are humankind’s next giant leap, one that will boost our capacity to understand, learn and build. With them, we will be able to open the doors to unimaginable discoveries and possibilities that will likely make us look like aliens on our own planet. Such power is being unleashed that we will have to define a purpose for it beyond profits and power, securing its use for the benefit of all. Dreamers will no longer exist the way we know them today.

Innovation Must Have A Greater Purpose

After many years doing marketing for companies of all sorts and sizes on three different continents, I came to the conclusion that focusing only on technological innovation could be a fatal, or at least dangerous, mistake. Marketing is one of the industries that has embraced and adapted to these new technologies at a remarkably fast pace. However, the power unleashed through technology is not enough if you don’t have a clear aim, and that aim cannot be only profits.

Technology, in most cases, increases efficiency. In essence, we achieve the same results, but faster, safer, cleaner and with fewer resources. Take marketing, for instance: social media, digital environments and IoT are all techniques marketing uses to the benefit of businesses’ profit and loss. Yet these technological innovations obtain the very same results as our old, traditional, nondigital media, only more efficiently: reach and segmentation.

I believe society is clamoring for a different impact. Innovation in technology is not enough. We need to innovate in management models that can guarantee, through the use and development of new technologies, that the impacts we generate are different. We need a broader base of prosperity that generates larger social equity and improves our environment.

Richard Branson has stated, “The brands that will thrive in the coming years are the ones that have a purpose beyond profit.” The future is now, and companies need to use technologies, products and services that allow them to go beyond profits without ever forgetting them.

Looking To Marketing As A Model To Follow

Marketing is the leverage that can serve as a bridge between corporations and society at large, launching profitable projects that also have social and environmental impacts. Marketing can also make consumers understand that they have the collective power, fostered by individual behavior, to demand those kinds of projects while accepting that companies make money along the way. It’s not bad to make money while helping others and the environment, and it is necessary to make those improvements sustainable.

Explainable AI: The Rising Role Of Knowledge Scientists

Artificial intelligence is expected to create trillions of dollars of value across the economy. But as the technology becomes a part of our daily lives, many people are still skeptical. Their main concern is that many AI solutions work like black boxes and seem to magically generate insights without explanation.

At the same time, knowledge graphs have been recognized by many industries as an efficient approach to data governance, metadata management and data enrichment, and they are increasingly being used as a data integration technology. But knowledge graphs are also more and more identified as the building blocks of an AI strategy that enables explainable AI through the design principle called human-in-the-loop (HITL).

Why does artificial intelligence often work like a black box?

The promise of AI based on machine learning algorithms such as deep learning is to automatically extract patterns and rules from large datasets. This works very well for specific problems and, in many cases, helps automate classification tasks. But why exactly things are classified one way or another cannot be explained: because machine learning cannot extract causalities, it cannot reflect on why certain rules are extracted.

Machine learning algorithms learn from historical data, but they cannot derive new insights from it. In an increasingly dynamic environment, this is causing skepticism because the whole approach of deep learning is based on the assumption that there will always be enough data to learn from. In many industries, such as finance and healthcare, it is becoming increasingly important to implement AI systems that make their decisions explainable and transparent, incorporating new conditions and regulatory frameworks quickly. See, for example, the EU’s guidelines on ethics in artificial intelligence.
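
The black-box complaint is easy to reproduce. In this short scikit-learn sketch (a generic illustration, not any particular vendor’s system), the model classifies accurately, yet the prediction carries no rationale, and the usual fallback is an aggregate signal such as feature importances rather than a per-decision explanation:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Accurate, but the answer is just a label: the model offers no rationale.
print(model.score(X_test, y_test))   # high accuracy, e.g. ~0.97
print(model.predict(X_test[:1]))     # a bare class label, no explanation attached

# The usual fallback is a global, aggregate signal, not a per-decision reason.
print(model.feature_importances_)    # one weight per input feature
```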

Can we build AI applications that can be trusted?

There is no trust without explainability. Explainability means that there are other trustworthy agents in the system who can understand and explain decisions made by the AI agent. Eventually, this will be regulated by authorities, but for the time being, there is no other option than making decisions made by AI more transparent. Unfortunately, it’s in the nature of some of the most popular machine learning algorithms that the basis of their calculated rules cannot be explained; they are just “a matter of fact.”

The only way out of this dilemma is a fundamental reengineering of the underlying architecture involved, which includes knowledge graphs as a prerequisite to calculate not only rules, but also corresponding explanations.

What is semantic AI, and what makes it different?

Semantic AI fuses symbolic and statistical AI. It combines methods from machine learning, knowledge modeling, natural language processing, text mining and the semantic web. It combines the advantages of both AI strategies, mainly semantic reasoning and neural networks. In short, semantic AI is not an alternative, but an extension of what is currently mainly used to build AI-based systems. This brings not only strategic options, but also an immediate advantage: faster learning from less training data, for example to overcome the so-called cold-start problem when developing chatbots.

What is a knowledge scientist?

Semantic AI introduces a fundamentally different methodology and, thus, additional stakeholders with complementary skills. While traditional machine learning is mainly done by data scientists, knowledge scientists are the ones who are involved in semantic AI or explainable AI. What is the difference?

At the core of the problem, data scientists spend more than half of their time collecting and processing uncontrolled digital data before it can be explored for useful nuggets. Many of these efforts focus on building flat files with unrelated data. Once the features are generated, they begin to lose their relationship to the real world.

An alternative approach is to develop tools for analysts to directly access an enterprise knowledge graph to extract a subset of data that can be quickly transformed into structures for analysis. The results of the analyses themselves can then be reused to enrich the knowledge graph.
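
Here is a minimal sketch of that pattern with the rdflib Python library; the graph contents and the example.org namespace are invented for illustration. The analyst queries the knowledge graph with SPARQL and gets back flat, analysis-ready rows, while the graph itself keeps the real-world relationships intact:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# A tiny (hypothetical) enterprise knowledge graph: companies, sectors, revenues.
for name, sector, revenue in [("Acme", "finance", 120), ("Globex", "retail", 75),
                              ("Initech", "finance", 40)]:
    node = EX[name]
    g.add((node, RDF.type, EX.Company))
    g.add((node, EX.sector, Literal(sector)))
    g.add((node, EX.revenue, Literal(revenue)))

# Extract a flat, analysis-ready subset: finance companies and their revenue.
query = """
    SELECT ?company ?revenue WHERE {
        ?company a ex:Company ;
                 ex:sector "finance" ;
                 ex:revenue ?revenue .
    }
"""
rows = [(str(r.company), int(r.revenue)) for r in g.query(query, initNs={"ex": EX})]
print(sorted(rows))
# [('http://example.org/Acme', 120), ('http://example.org/Initech', 40)]
```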

The semantic AI approach thus creates a continuous cycle of which both machine learning and knowledge scientists are an integral part. Knowledge graphs serve as an interface in between, providing high-quality linked and normalized data.

Does this new AI approach lead to better results?

Apart from its potential to generate trustworthy and broadly accepted explainable AI based on knowledge graphs, the use of knowledge graphs together with semantically enriched and linked data to train machine learning algorithms has many other advantages.

This approach leads to results with sufficient accuracy even with sparse training data, which is especially helpful in the cold-start phase, when the algorithm cannot yet draw inferences from the data because it has not yet gathered enough information (see also: zero-shot learning). It also leads to better reusability of training datasets, which helps to save costs during data preparation. In addition, it complements existing training data with background knowledge that can quickly lead to richer training data through automated reasoning and can also help avoid the extraction of fundamentally wrong rules in a particular domain.

Developing An Interest In Semantic AI

If you are a data scientist or data manager — or if you manage someone in such a position — it’s important to start digging into this research and developing the skills to work with semantic AI.

Semantically enriched data serves as a basis for better data quality and offers more opportunities for feature extraction. This leads to higher accuracy in prediction and classification, calculated by machine learning algorithms. Furthermore, semantic AI should create an infrastructure to overcome information asymmetries between AI system developers and other stakeholders, including consumers and policymakers. Semantic AI ultimately leads to AI governance that works on three levels: technical, ethical and legal.

Most ML algorithms work well with either text or structured data. Semantic data models can close this gap. Relationships between business and data objects can be made available for further analysis. This allows you to provide data objects as training datasets composed of information from structured data and text.
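
As a concrete sketch of closing that gap (the dataset and column names here are invented), scikit-learn’s ColumnTransformer can vectorise a free-text column alongside structured numeric columns and feed both into a single classifier:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set mixing free text with a structured field.
df = pd.DataFrame({
    "description": ["late payment on invoice", "routine monthly order",
                    "chargeback disputed by customer", "bulk seasonal order"],
    "amount": [540.0, 120.0, 980.0, 260.0],
    "risky": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "description"),   # unstructured text column
    ("nums", StandardScaler(), ["amount"]),       # structured numeric column
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df[["description", "amount"]], df["risky"])

print(model.predict(pd.DataFrame({"description": ["invoice payment overdue"],
                                  "amount": [700.0]})))
```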

2019: Recap On Tech Updates That Made Headlines

The phenomenon of year-end blues is a real thing, and psychology accounts for it. A rush of winter breeze, a bunch of holidays, family visits and decorations should all lighten your mood, but sometimes the opposite happens.

Psychology attributes this to factors such as less sunlight, anxiety about the year ending, and missing things or people that are no longer around. Taking a cue from this, your workplace could also be experiencing a gloom, with most of your co-workers on leave and the slow pace of work affecting your productivity.

What do you do when you find yourself heading towards the blues? You recall every good thing that happened in the year.

Here is all the ‘Good News’ the IT sector gave us in 2019:

Big data finds Kubernetes

Described by most IT experts as an extremely valuable asset to the IT sector, big data doesn’t disappoint. The year began with major announcements in big data analytics and growth, such as the use of big data in banking and analysis of how it would change the IT industry.

Acquisitions of data analytics and visualisation companies by big players such as Google Cloud and Salesforce show the growing need for managing big data.

On the technology front, the application of big data across cloud systems was joined by the use of Kubernetes. Originally designed by Google, Kubernetes is an open-source container-orchestration system that automates the deployment of applications. It has been of tremendous help to data engineering teams for microservices.
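
For a flavour of that automation, here is a minimal sketch using the official kubernetes Python client; it assumes an existing cluster, a configured kubeconfig and a hypothetical container image:

```python
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config

# Declare the desired state: three replicas of a (hypothetical) worker image.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="data-pipeline-worker"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "data-pipeline-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "data-pipeline-worker"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="worker", image="example.org/worker:1.0"),
            ]),
        ),
    ),
)

# Kubernetes then keeps three copies running, restarting any pod that fails.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```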

Read more: Why Organizations Want A Kubernetes-Based Container Service For Cloud

Stronger momentum in AI growth

Artificial intelligence and machine learning stole the limelight in 2019. Many new AI centres of excellence and AI courses were set up across the globe, especially in India. Facial recognition systems, AI-based monitoring tools, automation and the move of firms to AI-based systems were some of the major developments that stood out in 2019.

A few examples:

  • The Indian police have turned to AI-based facial recognition to catch criminals and find missing children.
  • Uber, Ola and Swiggy have used AI-based tools to understand customer preferences and to monitor rides and deliveries.
  • Many organisations have automated routine tasks with AI, such as filtering job applications. There has also been a lot of development in the autonomous vehicle industry, which uses machine learning and AI to gather data and respond to it.

This only underlines AI’s march towards the $118.6 billion in revenue by 2025 that reports have predicted.

Companies investing in IoT like never before

In the words of Jared Newman: “And just like any company that blissfully ignored the Internet at the turn of the century, the ones that dismiss the Internet of Things risk getting left behind.” 2019 has been a year in which companies have actively brought IoT into their workspaces and built further on that base.

  • Microsoft announced its acquisition of Express Logic, which provides a real-time operating system called ThreadX. It did this expecting to extend its IoT device coverage to include highly constrained devices.
  • SAP launched its Leonardo IoT platform, which easily combines IoT data with data from business processes. This is remarkably good for organisations headed towards digital transformation.
  • Cisco is another high-profile company that has placed trust and effort in IoT development, with its plan to acquire Sentryo to improve device visibility and the security of control systems. It also revealed that it will be moving to edge-based networks this year.

IoT’s character arc will shape up even better in the coming year, with companies taking an interest in it like never before.

Read more: Data, Automation, IoT Will Enable Virtual Societies In 2020: Report

Laws for data privacy on the way

It has been a rather difficult year for companies that store highly sensitive user data, such as Facebook, Google, WhatsApp and Twitter. Along with major breaches on social media, medical and government organisations have also been vulnerable to cyber attacks and data loss.

Read more: 7 High Profile Data Breaches That Shook 2019

2019 aggressively pushed the need for a data protection bill. The Indian government managed to secure a cabinet nod for the Personal Data Protection Bill, which is currently being debated in parliament. The bill works towards protecting the privacy of individuals’ data and sets a framework for how organisations will be allowed to access it.

Once passed, the bill would define what qualifies as a breach and the action that will be taken against it. That’s fantastic news for citizens!

In conclusion…

What proved to be a start of new developments in 2019 gives 2020 a base to grow upon. The enthusiastic exploration of data, AI and ML will fast-track digital transformation for companies and everyday lifestyles alike. Companies’ interest and investment in technologies like IoT and automation can create stronger connections between devices, and between human beings. Lastly, with so many emerging technologies and their vast capabilities, a legal framework of privacy laws will safeguard the sanctity of society.


Why This Delhi-Based Startup Prides Itself As The McDonalds Of Geospatial World

The synergistic integration of artificial intelligence (AI) and the geographic information dimension creates geospatial artificial intelligence (GeoAI). Geo-tagged big data collated from varied sources, such as satellite imagery via remote sensing, IoT sensors in smart cities, social media streaming, and personal sensing via connected ambient and wearable sensors, can be analysed using GeoAI to get actionable insights.
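
As a toy illustration of the kind of insight such imagery yields (generic remote sensing, not Attentive AI’s pipeline), the classic NDVI vegetation index needs only the red and near-infrared bands of a satellite scene:

```python
import numpy as np

# Fake 4x4 red and near-infrared reflectance bands; a real pipeline would
# read these from a GeoTIFF with a library such as rasterio.
red = np.array([[0.10, 0.12, 0.30, 0.32],
                [0.11, 0.13, 0.31, 0.33],
                [0.10, 0.12, 0.30, 0.31],
                [0.11, 0.14, 0.29, 0.30]])
nir = np.array([[0.55, 0.60, 0.35, 0.34],
                [0.58, 0.62, 0.36, 0.35],
                [0.57, 0.61, 0.33, 0.32],
                [0.56, 0.63, 0.31, 0.33]])

# NDVI = (NIR - Red) / (NIR + Red): near 1 for healthy vegetation,
# near 0 (or negative) for bare soil, buildings and water.
ndvi = (nir - red) / (nir + red)

vegetated = ndvi > 0.3                                  # crude vegetation mask
print(f"vegetated fraction: {vegetated.mean():.2f}")    # 0.50 for this toy scene
```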

Founded in 2017, Attentive AI was started by an IIT Delhi core team comprising Shiva Dhawan, Utkarsh Sharma and Sarthak Vijay. The company was established to develop artificially intelligent systems that can analyse petabytes of geospatial imagery and convert it into accurate insights. It serves geospatial technology providers and end users with 2D and 3D vector data extracted from satellite, aerial, street and drone imagery.

The mission of Attentive AI is to convert all the data collected by satellites, aeroplanes and drones into actionable insights, and help businesses, governments and non-profit organizations in reaching meaningful conclusions and making better decisions faster.



Every startup has its challenges and everyone should be prepared to tackle those, said Shiva Dhawan, founder and CEO of Attentive AI. “The biggest challenge for any startup is running out of money. Some run out of funding whereas some run out of paying customers,” said Dhawan.

He further said, “From the very beginning, we knew that we would face the latter problem rather than the former. However, we were also confident that overcoming this problem would lead us to build a sustainable business, and so we relied on our customers to sustain ourselves as well as to grow. As a result, we tried to be as frugal as possible with our expenses. For instance, for a long time, we did not work out of a high-end corporate office; instead, we worked out of a couple of apartments which were converted into modest office spaces.”


The key to success is to build a team that understands the importance of frugality and enjoys the startup journey just as much as you do, added Dhawan.

Attentive AI’s approach involves an experienced computer vision team that prepares deep learning models trained on geospatial imagery data; in-house annotators who process the machine-generated data, fixing gaps and inaccuracies; an expert in-house quality control team that ensures nearly 100% correctness through painstaking visual inspection; and client monitoring of live production until the map features are delivered in the desired file format.
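
That model-predicts, humans-correct loop is a common human-in-the-loop pattern. Below is a hypothetical Python sketch of the routing logic; the threshold, function names and toy stand-ins are invented for illustration and are not Attentive AI’s actual system:

```python
from typing import Callable

REVIEW_THRESHOLD = 0.90  # predictions below this confidence go to annotators

def process_tile(tile, model: Callable, annotate: Callable, qc: Callable):
    """Run one imagery tile through the model, then route by confidence."""
    label, confidence = model(tile)
    if confidence < REVIEW_THRESHOLD:
        label = annotate(tile, label)      # human annotator fixes gaps/errors
    return qc(tile, label)                 # QC signs off before delivery

# Toy stand-ins so the sketch runs end to end.
fake_model = lambda tile: ("building", 0.72 if "ambiguous" in tile else 0.97)
fake_annotate = lambda tile, label: "shadow"          # human overrides the model
fake_qc = lambda tile, label: (tile, label)

print(process_tile("tile_003_ambiguous", fake_model, fake_annotate, fake_qc))
# ('tile_003_ambiguous', 'shadow')  -- low confidence, so the human label wins
print(process_tile("tile_004", fake_model, fake_annotate, fake_qc))
# ('tile_004', 'building')          -- high confidence, model label kept
```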

Standing Out From The Crowd

While the Chinese government leads the AI race with its expansive surveillance system and heavy investment in AI research and skill development, other Asian governments still lag in implementing AI initiatives.

Against this backdrop, Attentive AI has come up with an innovative way to deliver the best GIS solutions using high-quality digital maps. MapX is Attentive AI’s flagship product, used to request meaningful insights from geospatial imagery data. At its core is a simple workflow: each user selects an address or an area, requests the analytics they desire from a list of available options (or makes a custom request), and receives near-instant delivery of the output and insights.

MapX prides itself on being the ‘McDonald’s’ of the geospatial world: one does not have to wait a long time to get a geospatial dataset or a digital map. “Initially, when McDonald’s had started, people said it was impossible to deliver high-quality burgers in such a short time, but they did it, and their key to success was their engineering process. Similarly, delivering high-quality land features almost instantly was also impossible, but Attentive AI makes it possible through an engineering process at the core of which is our AI algorithm, supported by a detailed QC process,” said Dhawan.

Being a customer serving web platform, MapX also makes the experiences of ordering geospatial data very seamless for customers and users. “Our customers attest to the seamless geospatial data ecosystem that we are creating.”

Surviving Industry Challenges

AI, being one of the newest innovative technologies with a wealth of commercial benefits, comes with several industry challenges. The major limitation in the AI industry is customer awareness of what AI actually is: it is widely believed that AI itself is the solution to all problems. That is not true, as AI technology alone cannot provide a complete solution. AI aims to solve the most complex steps, but the whole solution can only be created with a combination of multiple technologies, including AI.

Explaining this, Dhawan said, “One of the major challenges of being an AI service provider is a customer’s expectations. Usually, customers tend to expect more from AI than what is currently possible. And this is mostly because of the increased hype around AI, and also because a lot of companies project a higher AI capability than what is possible at the current stage.”

“These companies show pilots on specific and suitable examples; however, the truth is that scaling an AI is incredibly hard, and it takes a significant amount of time to build an AI that is accurate in all possible user scenarios. We, at Attentive AI, try to mitigate this challenge by educating our customers transparently about the training period and the processes that are necessary to make the AI scalable. Thus our clients understand that the AI will not be scalable from day one; rather, it will become so through an iterative and active learning mechanism that helps it grow and become more efficient,” said Dhawan.

In the rising age of AI, the key to success is to build artificial intelligence systems for a specific niche data source to harness the power of data network effects, which Attentive AI is acing with its high-resolution geospatial imagery. “This gives us a competitive advantage while making cutting-edge breakthroughs every week, since we have worked on multiple use cases over a period from which our AI systems are continuously learning,” said Dhawan.

Future Prospects

Attentive AI, as an AI service provider, aims to create an accurate, constantly updating digital twin of the physical world. “We have only touched the tip of the iceberg, as we are creating more AI technologies to analyse aerial imagery in specific geographies,” said Dhawan.

“We aim to build a global repository of multiple geospatial imagery sources and a suite of intelligent analytics for customers to request analytics on drones, streets, light detection and ranging (LIDAR), and all other kinds of geo-data sources at any time from anywhere,” concludes Dhawan.

