UPDATED 18:02 EST / FEBRUARY 12 2021


Future of compute will be big, small, smart and way out on the edge

Predicting the future of compute is a little like forecasting an afternoon of high winds and rain on a calm, cloudless morning. It may not seem obvious, but change is coming. It always does.

What change will look like for the computing world depends on how the strategies being put in place now by major cloud providers and an expanding ecosystem of technology firms become more widely adopted by enterprises at scale.

Compute companies are thinking big in terms of expanding the capabilities of the cloud, but that will also mean putting more processing power into smaller devices, mass application of artificial intelligence and machine learning, and deploying workable, efficient solutions at the edge.

“At Microsoft, we think of the cloud as the world’s computer,” Mark Russinovich, chief technology officer at Microsoft Azure, said during an appearance this week as part of MIT Technology Review’s “Future of Compute” virtual event. “How can we define a uniform computing environment? Ideally, developers should be writing code once and deploying it across the entire spectrum. This is one of the big challenges we’re facing; there is still a ton of work and innovation left to be done.”

Rise of exascale computing

If the cloud is indeed going to be the world’s computer, then compute itself must become bigger, better and faster. Uncontrolled data growth, combined with the growing demands of AI, simulation and modeling workloads, has created the ultimate mission-critical workflow.

That will require exascale-class high-performance computing, delivered through technologies from enterprise vendors such as the combined Hewlett Packard Enterprise Co. and Cray Inc. Exascale, defined as the ability to perform more than a quintillion double-precision floating-point calculations per second, is the next milestone for the machines ranked on the “TOP500 list” of the world’s fastest supercomputers.
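
For a rough sense of what that threshold means, here is a back-of-the-envelope sketch in Python. The node count and per-accelerator throughput below are illustrative assumptions, not the specifications of any real system.

```python
# Back-of-the-envelope arithmetic for "exascale." The hardware figures are
# illustrative assumptions only, not any actual machine's specification.
EXAFLOP = 1e18  # one quintillion double-precision operations per second

nodes = 9_000                        # assumed node count
accelerators_per_node = 4            # assumed accelerators (e.g., GPUs) per node
fp64_flops_per_accelerator = 30e12   # assumed 30 teraflops of FP64 each

theoretical_peak = nodes * accelerators_per_node * fp64_flops_per_accelerator
print(f"Theoretical peak: {theoretical_peak / EXAFLOP:.2f} exaflops")
# Prints: Theoretical peak: 1.08 exaflops -- just over the exascale line.
```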

A key development on the exascale computing scene will be the ability to move large amounts of data and applications on and off these machines. An example is HPE’s Slingshot interconnect, which manages data-intensive workflows over Ethernet in data centers around the globe.

“We’re building these systems to perform like a supercomputer, but run like the cloud,” Peter Ungaro, general manager of high-performance computing and mission critical solutions at HPE, said during a presentation at the MIT event. “It’s a really fundamental new generation of technology.”

Building the internet computer

While high-performance computing represents a major trend in expanding compute power, there’s also an effort gathering steam to transform who controls the servers themselves. Funded by crypto token sales and investors such as Andreessen Horowitz, the Dfinity Foundation is seeking to create one of the biggest transformations in tech: making the internet itself the world’s largest computer.

Dfinity’s concept of an internet computer is built on decentralized technology fueled by independent data centers where software runs anywhere on the internet rather than on servers controlled by major cloud providers such as Microsoft Corp. or Amazon Web Services Inc.

By wresting control of compute networks away from advertising- and fee-dependent providers, Dfinity envisions a return of the internet to its roots as a free market where innovation can thrive. Dfinity opened to developers last year and unveiled its governance system and token economics in September. In December, it launched the mainnet of its internet computer.

“The internet computer is an extension of the internet that takes it from being an open permissionless network to also being an open permissionless compute platform,” said Dominic Williams, president and chief scientist at Dfinity. “It will not only change the way we work, but enable us to innovate in completely new ways. Ten years from now, it will be absolutely clear that the internet computer is going to win.”

If Williams’ vision indeed becomes reality, it could have a major impact on companies that have made big bets on marrying proprietary cloud and on-premises data center architectures. IBM Corp. is one firm that falls squarely into this hybrid model.

IBM CEO Arvind Krishna (Photo: MIT Future Compute livestream)

The company has moved aggressively to capture the hybrid cloud market through its $34 billion acquisition of Red Hat Inc. and its OpenShift platform, a deal announced in 2018 and completed in 2019. In an appearance during the MIT event this week, IBM Chief Executive Arvind Krishna made it clear that his company has no intention of departing from that strategy.

“Hybrid is where the money is; it’s where the next generation architecture is going,” Krishna said. “With only 20% of workload in public, it should have moved a lot faster, and the fact that it has not tells you there is a lot of demand for both public and private infrastructures. Is it a phase or is it a destination? History will show who is right.”

Moving deep learning to devices

However, the future of compute is not all about massive computational power or globally distributed systems. An effort is also underway to take deep learning off the large systems where it has traditionally run and shrink it down to the device level.

MIT researchers have developed a system called MCUNet that designs compact neural networks for deep learning on “internet of things” devices. In a presentation, Ji Lin, a Ph.D. student working under the auspices of AI pioneer Song Han, showed how deep learning can be performed on microcontrollers.

Using what Lin described as “TinyNAS,” a neural architecture search technique, and “TinyEngine,” which generates the code needed to run the resulting network, the researchers achieved better than 70% accuracy classifying previously unseen images from the ImageNet database on a microcontroller.
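
The MCUNet code itself isn’t reproduced here, but the general idea of squeezing a model into microcontroller-scale memory can be sketched with standard tooling. The Python below applies TensorFlow Lite’s post-training int8 quantization to a deliberately small, untrained Keras model; it is a hedged illustration of the “fit the model to the device” step, not MCUNet’s actual TinyNAS or TinyEngine.

```python
# Minimal sketch: shrink a small image classifier to int8 for a
# microcontroller-class target. This is NOT MCUNet's TinyNAS/TinyEngine;
# it only illustrates the general "make the model fit the device" step
# using the standard TensorFlow Lite converter.
import numpy as np
import tensorflow as tf

def build_tiny_classifier(num_classes: int = 10) -> tf.keras.Model:
    """A deliberately small CNN, sized with flash/SRAM limits in mind."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        tf.keras.layers.SeparableConv2D(16, 3, strides=2, activation="relu"),
        tf.keras.layers.SeparableConv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes),
    ])

def representative_data():
    # Placeholder calibration data; a real deployment would stream a few
    # hundred genuine training images here instead of random arrays.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

model = build_tiny_classifier()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```

On an actual microcontroller the resulting model would then be run with an embedded runtime such as TensorFlow Lite for Microcontrollers; the MCUNet work goes further by co-designing the network architecture and the inference engine so both fit within the very small memory of such chips.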

Advances such as these will have implications both for the impact of AI and machine learning in enterprise computing and for the deployment of smart technologies at the edge.

One area where accelerated machine learning could have a significant influence is industrial robotics. Amazon.com Inc. has developed a number of offerings to support this space, and the retail giant points to its own massive network of automated fulfillment centers as a prime example of how simulation can make a major difference in generating training data for robotics.

“Our goal is 20 minutes from the time you order something to when it gets placed on a truck and on its way to you,” said Bill Vass, vice president of engineering at AWS. “Those fulfillment centers were designed based on simulation. The robotics manufacturers need to embrace simulation and machine learning. They have been slow to do that up to now.”

One company that’s all-in when it comes to the use of AI and machine learning is the cloud-based platform-as-a-service provider ServiceNow Inc. The company made four analytics-related acquisitions over the past year, the most recent being the purchase of Element AI in December.

“We’ve seen widespread adoption of machine learning across our enterprise,” said Chris Bedi, chief information officer at ServiceNow. “We really have an opportunity as technologists to bring machine learning into everything we do. I firmly believe it will deliver the next exponential jump in productivity for every company that uses it.”

A common theme running throughout the discussion of the future of compute was general acceptance that much of the enterprise action is moving to the edge. Yet to be determined are the technologies that will give major players a competitive advantage in that space.

One trend on the radar is fog computing, a decentralized computing layer positioned between the cloud and the edge where data and applications can be processed without the limitations of individual devices or the expense of large server infrastructure.
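
As a purely illustrative sketch of that intermediate layer, the Python below models a hypothetical fog node that aggregates raw device readings locally and forwards only compact summaries upstream. The class name, batch size and “forward to cloud” step are assumptions for illustration, not any vendor’s API.

```python
# Illustrative-only sketch of the fog computing idea: a node sitting between
# devices and the cloud that aggregates raw readings locally and sends only
# compact summaries upstream. All names and thresholds are hypothetical.
from statistics import mean
from typing import Optional


class FogNode:
    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, reading: float) -> Optional[dict]:
        """Buffer one device reading; return a summary when a batch fills."""
        self.buffer.append(reading)
        if len(self.buffer) < self.batch_size:
            return None  # keep raw data local to the fog layer
        summary = {
            "count": len(self.buffer),
            "mean": round(mean(self.buffer), 2),
            "max": max(self.buffer),
        }
        self.buffer.clear()
        return summary  # only this small record would travel to the cloud


node = FogNode(batch_size=3)
for value in [21.0, 21.4, 35.2, 20.9]:
    summary = node.ingest(value)
    if summary is not None:
        print("forward to cloud:", summary)
# Prints one summary after the third reading; the fourth stays buffered.
```

The point of the pattern is simply that bandwidth-heavy, latency-sensitive work stays close to the devices while the cloud receives only what it needs.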

Cisco Systems Inc. claimed credit for coining the term in 2014 and has been active in building solutions around it. There are a number of tech companies considered to be major players in this segment, including Microsoft.

However, in his discussion during the MIT event, Microsoft’s CTO sounded a note of skepticism about the future of fog.

“I’m skeptical that fog will become the general architecture for edge computing,” Russinovich said. “We’re still figuring out what these edge topologies will look like.”

Regardless of how edge technologies get sorted out, there can be little doubt that the coming of a new 5G wireless standard will shape much of computing’s future for years to come.

“5G is going to drive more edge compute than we’re used to,” said IBM’s Krishna. “It will take the next two to four years to play out. To paraphrase Arthur C. Clarke, technologies underwhelm us with adoption in the first three or four years and then overwhelm us in the next part.”

Image: Harri Vick/Pixabay
