Tech Trends 2020

Intro

Tech trends for 2020 have already been described by many authors. A short web search of the phrase shows plenty of results about AI products, automation and human experience, autonomous driving, ethical technology, DARQ, distributed cloud, and the list goes on. All those articles are very inspiring, but I don’t find them practical enough. That’s why in this post, I’d like to share more straightforward tech insights. It’s worth mentioning that this article is a collective work of experts in particular domains, which—I believe—makes the content more relevant and valuable.

Our take on tech trends 2020 covers three areas:

  • cloud, which is having its moment,
  • mobile, which has a stable place in the industry and keeps evolving,
  • embedded, which is something of a niche but is becoming significantly more impactful.

Cloud

The future of the cloud is bright. While Amazon still dominates the space, others, especially Microsoft and Google, are catching up. Looking at the local (Polish) market, the opening of the Warsaw GCP region is a big step in Google’s expansion across CEE countries. Moreover, the companies that used to be strongholds of on-premises solutions, such as banks and telecoms, are finally deciding to move their data to the cloud. And that’s just the beginning.

Kubernetes and the cloud-native approach are taking the world by storm. However, we can notice a few cracks in the otherwise seemingly unstoppable march of the related technologies.

Knative and serverless

Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. Knative and the serverless approach were already trending last year. However, there is growing evidence of the limitations of serverless, which can be especially painful for those who follow the trend blindly, without properly validating their case. It seems that in this trend we are just past the “trough of disillusionment” phase, and the “slope of enlightenment” is ahead of us.

Istio

Istio is an open-source, independent service mesh meant to connect, monitor, and secure microservices. While promising, it is yet to be battle-tested. Istio is great in theory, but services run with Istio are proving pretty hard to set up and manage. Network configuration seems to be one of the areas that has received too little love and attention from the cloud providers, especially in terms of usability, setup, configuration, and operational difficulty. Even taking its strong security properties into account, the networking setup is too complex and demanding. To make this technology accessible, it needs to become easier to set up and maintain while still providing secure deployment of the services running within it. Cloud providers that address this issue will definitely gain an advantage.

Kubernetes

Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. It solves many problems, but it does not serve all cases. While it can resolve typical issues for many services and companies, its design and complexity keep it from being a universal fit. There are many cases where Kubernetes is simply too complicated and doesn’t benefit the user enough to justify the investment. That’s why in 2020 we can expect to see some competitors in this area: simpler, leaner, maybe even less capable, but easier to set up and maintain. They could be “hybrid” solutions that bring together the management of Kubernetes environments across multiple clouds and on-premises, and they might also cover non-Kubernetes deployments.

An interesting tool worth mentioning alongside Kubernetes is Helm. Helm manages packages of pre-configured Kubernetes resources called Charts. Thanks to Charts, it’s easy to set up a toolset of choice such as Airflow, InfluxDB, Jenkins, or Hadoop. With Helm, you can also create your own configurations as YAML templates. There is a dedicated repository maintained by the Helm project, where you can find Charts with ready-made recipes for your infrastructure.

We should also mention the still-trending microservices. Both new applications and existing monoliths are being (re)organized into microservice architectures. Thanks to cloud solutions (like Kubernetes), it is easy to develop new applications as sets of small services, or even serverless functions, which can then be scaled effortlessly. A significant advantage of dividing an application into such services is that each one keeps a smaller codebase, maintained by a dedicated team of developers responsible only for their part, without directly affecting the other teams. If we need a custom service that can’t be provided by an off-the-shelf cloud offering (like SaaS), Java, Python, and Go teams can work on their services simultaneously, which can result in shorter development cycles.

With the growing distributed architecture of applications, there is a need to orchestrate and monitor all the connections between services, and that is the job of a service mesh. That’s why we see tools such as Istio, Consul, and Linkerd trending: used in microservice applications, they provide monitoring, access control, service-to-service authentication, and load balancing.

Edge Computing

In some industries, there is a considerable need for managing off-the-cloud solutions (think factories, wind farms, micro-power-plants, etc.). There you need decent computing power available much closer and more reliably than in the nearest data center. You simply cannot entirely rely on constant connectivity, yet you want to manage a fleet of machines and services running on them.

Currently (as of the December 2019 re:Invent announcements), edge computing is defined in terms of being “close to the user,” with “single-digit-millisecond latency” figures. However, it seems to be more than that. Cloud providers, if they want to enter this area, need to be ready to provide a “cloud-in-a-box” approach. It means they should supply ready-to-use hardware and software solutions that the industry could install easily and manage at the physical location where they are actually needed. This requires a unique mix of hardware, software, networking, and power that could provide container-like, tangible solutions for the industry. These building blocks should be easy to combine and group together to form local computing grids that can be managed centrally. That is the true meaning of “edge computing.”

We are at the very beginning of this revolution, and it will take many years to develop this technology fully. But we can already see some building blocks that might enable it—e.g., Project EVE, which is an OS that allows you to deploy and manage Cloud Native apps.

Data in the Cloud

Currently, there are many options for storing and processing data provided by big players like GCP or AWS, and new ones are still being created. A good example is Snowflake, which offers good old SQL in a SaaS model. The tool provides features such as flexible scalability, instant cloning of your database, and seamless integrations with many services. All of this is backed by millions of developers already familiar with SQL and by tons of legacy software that can be migrated to Snowflake without a major redesign! I believe we’ll see more Snowflake-like solutions very soon, but right now Snowflake has the advantage of being the first in the field.
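Because Snowflake speaks plain SQL over standard interfaces, using it from application code looks much like talking to any other database. Below is a minimal Kotlin sketch, assuming the Snowflake JDBC driver (net.snowflake:snowflake-jdbc) is on the classpath; the account URL, credentials, warehouse, and database names are placeholders, and the last statement shows the zero-copy cloning mentioned above.

    import java.sql.DriverManager
    import java.util.Properties

    fun main() {
        // All identifiers below are placeholders; substitute your own account and objects.
        val props = Properties().apply {
            setProperty("user", "REPORTING_USER")
            setProperty("password", System.getenv("SNOWFLAKE_PASSWORD").orEmpty())
            setProperty("warehouse", "ANALYTICS_WH")
        }
        val url = "jdbc:snowflake://my_account.snowflakecomputing.com/"
        DriverManager.getConnection(url, props).use { conn ->
            conn.createStatement().use { stmt ->
                // Plain SQL, as in any other relational database...
                val rs = stmt.executeQuery("SELECT CURRENT_VERSION()")
                if (rs.next()) println("Connected to Snowflake ${rs.getString(1)}")
                // ...plus Snowflake-specific extras, such as a zero-copy clone of a database.
                stmt.execute("CREATE DATABASE analytics_dev CLONE analytics")
            }
        }
    }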

DevSecOps

Security is still a strong trend, and it’s gaining more attention in DevOps culture, resulting in a new kind of role: DevSecOps. When the process of software development is divided between various teams and split into phases, it is difficult to pinpoint the moments crucial for ensuring app security. In the initial stages, the PoC is the most important part of the project: the whole environment is developed fast, often with security skipped to speed up the delivery of the product and its functionality. Then developers focus on new features. Security is added “in the meantime” or, in the case of the infrastructure, it is often postponed until the production phase. This often results in a feeling of “blurred” responsibility for the application’s security. By adding the security part to DevOps, it becomes possible to check application vulnerabilities during each development cycle, with security checks embedded in the CI/CD pipelines.
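As one narrow, hedged example of such an embedded check: in a JVM project built with Gradle, the OWASP dependency-check plugin can scan dependencies for known vulnerabilities on every build. The plugin version and CVSS threshold below are illustrative, not a recommendation.

    // build.gradle.kts (sketch): scan project dependencies for known CVEs.
    plugins {
        id("org.owasp.dependencycheck") version "5.3.0"
    }

    dependencyCheck {
        // Fail the build when any dependency has a vulnerability scored 7.0 (high) or above.
        failBuildOnCVSS = 7.0f
    }

A CI/CD pipeline can then run ./gradlew dependencyCheckAnalyze as a regular stage, so every change is checked before it reaches production; similar scanners exist for container images and infrastructure code.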

Development

There is an urgent need to simplify and streamline the development of cloud services. It is an area where the cloud approach has yet to be fully implemented. Currently, there is far too big a disconnect between production-ready, cloud-run deployments and the development environments used by the people who create them. The road from writing a line of code, through testing it in a private environment, to staging and production is often long and painful. We’re missing an environment where, on the one hand, you have a well-managed production service and, on the other, as a developer, you can get an environment resembling the production setup up and running in a few minutes. That would allow you to start iterating on it and make your changes reach production quickly.

Currently, especially in the microservice-hyped world, it’s often more difficult to set up and manage your development environments than the production ones. It seems that we’ve moved in the direction of making the production environments easy to manage, but we somehow forgot about the developers building them and how important it is to make their lives easier. This is especially important in a world where we need to iterate quickly and deliver business value faster than ever. Data scientists often experiment with their data in quick iterations, and it’s quite annoying that applying the results of their experiments to production requires going through a long and painful process. The tools, practices, and culture required to change that are yet to be created. For Polidea, it’s an exciting place to be, as we’ve always focused on the productivity of the development process and have been at the forefront of this trend.

Mobile

It’s going to be an exciting year in the mobile industry. We constantly hear interesting news: more foldable devices (which will bring various challenges to app design and architecture), smart bands gaining access to Google Play, or last year’s announcement of iPadOS, which means more focus on Apple tablets and their user experience. But there are a few topics I find particularly exciting.

Kotlin Multiplatform

Kotlin Multiplatform (known as MPP) is a project that aims at sharing the app’s business logic, instead of being yet another cross-platform UI framework. JetBrains is very active in the field and creates libraries that enhance the development of MPP features, such as serialization, coroutines, or HTTP (Ktor). JetBrains is not the only company investing time and effort to make it better; other prominent players, such as Square and Touchlab, are also pushing MPP forward.
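The core idea is the expect/actual mechanism: common code declares what it needs, and each platform supplies its own implementation. Here is a minimal sketch; all the names are illustrative and not taken from a real project.

    // commonMain: shared business logic, compiled for every target.
    expect fun platformName(): String

    fun greeting(): String = "Shared Kotlin code running on ${platformName()}"

    // androidMain: the Android counterpart of the expected declaration.
    actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

    // iosMain would supply its own actual, e.g. based on platform.UIKit.UIDevice.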

MPP seems like a perfect match for BLE connectivity. It could allow you to write the code once and run it on multiple platforms, use the pleasant coroutines API, and get rid of the nasty bridges that may decrease performance. This has the potential to become a game-changer.
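To make that concrete, shared code could expose a small, coroutine-based BLE abstraction like the hypothetical interface below; it is only a sketch of the idea, not the API of any existing MPP BLE library.

    // Hypothetical common-code BLE client; each platform implements it on top of its
    // native stack (BluetoothGatt on Android, CoreBluetooth on iOS), with no extra bridges.
    interface BleClient {
        suspend fun connect(deviceId: String)
        suspend fun read(characteristicUuid: String): ByteArray
        suspend fun write(characteristicUuid: String, value: ByteArray)
        suspend fun disconnect()
    }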

An Approach to UI

Great minds think alike: it is fascinating to watch how every major platform pursues the same way of writing UI, starting with React Native, which changed the mobile app development landscape, through Flutter, SwiftUI, and Jetpack Compose.

SwiftUI offers a new approach to UI development and will probably be widely used in the iOS community in the near future. However, there’s still much to be discovered and tested by the developer community, and good practices are yet to be established.

Android’s SwiftUI equivalent, Jetpack Compose, is also worth mentioning. Currently, it’s still under active development (it’s not even in alpha yet), but it’s already a hot topic in the Android community. Google promises to release the beta in 2020, but we’ll see what the future brings. In general, declarative UI is going to be a huge change, without a doubt.
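To show what the declarative style looks like, here is a tiny sketch of a composable function. Since Compose is still pre-alpha, its package names and APIs keep changing, so treat this purely as an illustration rather than a final API.

    // UI is described as a function of data: call it with a new list and the UI updates.
    @Composable
    fun GreetingList(names: List<String>) {
        Column {
            names.forEach { name ->
                Text(text = "Hello, $name!")
            }
        }
    }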

Let’s have a look at the developers’ favorite, Flutter. There is massive hype around this technology, and it is understandable. In many ways, Flutter seems to be easier to work with and more dev-friendly than React Native. Google puts a lot of effort into making it a default way of developing mobile apps and broadens the ecosystem by working on a stable release of Flutter for the web.

Operating Systems

When it comes to operating systems, there are two big subjects to take a closer look at. First of all, Huawei announced its own operating system, which hasn’t received a warm welcome, and the company declared that it would like to stick to Android for as long as possible. However, Huawei might soon start releasing phones with the new operating system, which would bring an entirely new platform to the market. It could be a game-changer in some markets.

Secondly, there is the almost mythical Fuchsia, a new operating system developed by Google. There is no official information and no roadmap; we basically know nothing about it, except for some rumors and futuristic scenarios. What we do know is that if Google decides to release Fuchsia and replace Android with it, it would be the end of the market as we know it.

Embedded

Safety & Security

Increasingly complex MCU solutions and the need for connectivity between multiple devices via wireless protocols make it harder to write secure and safe code. As IoT devices become more popular, potential attacks become more severe: they could leak sensitive data or cause other harm. In 2020, we will see a greater focus on network security, data security, privacy, and other areas related to code quality in embedded software.

Alternative Programming Languages

Future embedded development will be more open to programming languages other than C and C++. The Rust programming language is an excellent candidate, given the increasing focus on code quality and its safety guarantees. The community-created Embedded Working Group is focused on bringing Rust to the embedded world, and the most important pieces are already in place for early adopters. Another option is MicroPython. It seems very suitable because MCUs are getting more powerful, and there are ever more programmers familiar with high-level programming languages.

RISC-V

The open RISC-V instruction set architecture may bring more power-efficient and cheaper MCU solutions in the near future. Its openness is a significant argument, as developers are more keen on working with open-source tools and compilers. RISC-V is also a good candidate for multicore solutions and specialized computation thanks to its flexibility.

Machine Learning & Edge Computing

An increase in MCU performance, the emergence of specialized cores, and the need to process large amounts of data all favor local processing. The goal is to process data as close to the source as possible, to reduce latency and cut down on power-inefficient communication between devices and the cloud. Machine learning running on the device can solve many problems that were previously computed by external systems.

Conclusion

We live in absolutely fascinating times, and the future of technology is thrilling. Every year brings something new and, in a sense, revolutionary. 2020 will seemingly bring a huge change in the rapidly evolving cloud industry. The mobile and embedded industries are moving fast as well, and they aim to be essential to the end-user experience. All of that will influence many other areas, such as automation, the availability and capabilities of AI, edge computing, and more. It will force us to rethink our approach to technology and the way we use it.

Acknowledgments

This article wouldn’t have been created without the input and support of my friends and coworkers: Jarek Potiuk, Darek Aniszewski, Katarzyna Kucharczyk, Paweł Byszewski, Darek Seweryn, Michał Zieliński, Marta Woldańska, Michał Mizera, and Przemek Lenart.

Maciej

CTO

Did you enjoy the read?

If you have any questions, don’t hesitate to ask!
