A unified programming model
that fits your cloud needs


Take advantage of the open-source, unified programming model for implementing your real-time and batch data processing. Learn about our Apache Beam services!

Apache Beam is an open-source, unified model for implementing stream and batch data processing jobs that run on any pipeline runner and can be written in multiple languages. Together with other members of the community, our committers proudly contribute to the Apache Beam project.

Let’s talk about your project!

Looking for help with customizing your Apache Beam platform? You’re in the right place! Get in touch and discover our Apache Beam services.

What we do

1. Open source customization
2. Consulting
3. Contribution to open source
4. Backend

"Apache Beam gave the cloud community a completely new way of creating data processing systems. It was exciting to be able to bring this project closer to the users."

Katarzyna Kucharczyk

Lead Software Engineer,
Polidea

We have 3 Apache Beam committers who make the list of the 100 most active contributors in the project.

Our Apache Beam projects

Open Source

Apache Beam


A Testing Framework for a Data Processing Tool

Polidea delivered a testing framework that lets Beam users check whether their solution is correct and performant. The results help assess which parts of the Apache Beam code need fixing or optimizing, and make it easy to estimate which Beam runner, IO, or filesystem to use. Running tests in simulated environments clarifies early on which approach is likely to generate the fewest bugs and problems, saving the engineering team time and money.

Our clients
are our partners

Have more burning questions about Apache Beam?

We've got the answers!

What is Apache Beam used for?

Apache Beam is a data processing framework used to process data in both batch and streaming modes on various processing engines such as Dataflow, Flink, Spark, or Samza. In Apache Beam, you can write your code in the SDK of your choice: Java, Python, or Go.
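As a minimal sketch in the Python SDK (the inline data and step names are invented for this example), a pipeline can be as small as:

    import apache_beam as beam

    # A tiny pipeline: create a few in-memory records, transform them,
    # and print them. Runs on the local DirectRunner by default.
    with beam.Pipeline() as pipeline:
        (pipeline
         | "Create" >> beam.Create(["alpha", "beta", "gamma"])
         | "Uppercase" >> beam.Map(str.upper)
         | "Print" >> beam.Map(print))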

What data processing engines are supported by Apache Beam?

Apache Flink, Apache Nemo, Apache Samza, Apache Spark, Google Cloud Dataflow, and Hazelcast Jet. Apart from that, you can always execute your code locally for testing and debugging purposes.
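The runner is just a pipeline option, so the same code can target different engines. A sketch in the Python SDK, assuming the matching runner dependencies are installed (the project, region, and bucket names below are placeholders):

    from apache_beam.options.pipeline_options import PipelineOptions

    # Local execution on the DirectRunner (the default), handy for
    # testing and debugging:
    local_options = PipelineOptions(["--runner=DirectRunner"])

    # The same pipeline on Google Cloud Dataflow; the project, region,
    # and bucket below are placeholder values:
    dataflow_options = PipelineOptions([
        "--runner=DataflowRunner",
        "--project=my-gcp-project",
        "--region=us-central1",
        "--temp_location=gs://my-bucket/tmp",
    ])

Passing either options object to beam.Pipeline(options=...) is all it takes to retarget the pipeline.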

What is the role of Apache Beam in GCP?

The history of Apache Beam started when Google donated the Cloud Dataflow SDK to the Apache Software Foundation. Now, Apache Beam is the only way to execute data processing pipelines within the Google Cloud Dataflow ecosystem.

Is there any difference between writing batch and streaming jobs?

Since Apache Beam’s model is unified, you can use one API for writing both batch and streaming jobs. This sets Beam apart from similar solutions, where batch and streaming jobs often have different APIs.
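As a sketch of that unification in the Python SDK, the same counting logic can be applied to a bounded file source or an unbounded Pub/Sub source; only the read step and the windowing change (the file, project, and topic names are placeholders):

    import apache_beam as beam

    def count_words(lines):
        # The counting logic is identical for batch and streaming inputs.
        return (lines
                | "Split" >> beam.FlatMap(lambda line: line.split())
                | "Pair" >> beam.Map(lambda word: (word, 1))
                | "Count" >> beam.CombinePerKey(sum))

    with beam.Pipeline() as p:
        # Batch: a bounded text file (the path is a placeholder).
        counts = count_words(p | "Read" >> beam.io.ReadFromText("input.txt"))

    # Streaming: only the source and the windowing differ, for example
    # (needs the GCP extras and the --streaming pipeline option):
    #
    #   from apache_beam import window
    #   counts = count_words(
    #       p
    #       | beam.io.ReadFromPubSub(topic="projects/my-project/topics/my-topic")
    #       | beam.Map(lambda msg: msg.decode("utf-8"))
    #       | beam.WindowInto(window.FixedWindows(60)))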

What is the biggest advantage of using Apache Beam?

None of the existing data processing engines has been adopted as the default standard in the industry. Given that, Apache Beam is a safe choice for new projects, because it makes a potential transition to a different data processing engine a lot easier.

Apache Beam: Tutorial and Beginners' Guide

Maintaining different technologies is always a big challenge for both developers and businesses. This is especially true in the big data world. There are so many big data technologies, such as Hadoop, Apache Spark, and Apache Flink, that it is easy to get lost. Which tool is the best for real-time streaming? Is the speed of one particular tool enough for our use case? How should you integrate different data sources? If these questions often come up in your company, you may want to consider Apache Beam.

What is Apache Beam? It’s a programming model to define and execute both batch and streaming data processing pipelines. The history of Apache Beam started in 2016, when Google donated the Google Cloud Dataflow SDK and a set of data connectors for Google Cloud Platform to the Apache Software Foundation. This started the Apache incubator project. It did not take long for Apache Beam to graduate, becoming a new Top-Level Project in early 2017. Since then, the project has experienced significant growth in both its features and its surrounding community.

Apache Beam Overview

How does Apache Beam work? First, you choose your favorite programming language from the set of provided SDKs: currently Java, Python, or Go. Using your chosen language, you write a pipeline, which specifies where the data comes from, what operations need to be performed, and where the results should be written. Then, you choose the data processing engine on which the pipeline will be executed. Beam supports a wide range of data processing engines (in Beam’s terminology: runners), including Google Cloud Dataflow, Apache Flink, Apache Spark, and many others. Of course, you can also execute your pipeline locally, which is especially useful for testing and debugging.
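To make those three parts concrete, here is a sketch of the classic word count in the Python SDK, with the source, the transforms, and the sink called out (the file paths are placeholders):

    import apache_beam as beam

    with beam.Pipeline() as p:
        (p
         # 1. Where the data comes from: a text file source.
         | "Read" >> beam.io.ReadFromText("input.txt")
         # 2. What operations are performed: split lines, pair, count per word.
         | "Split" >> beam.FlatMap(lambda line: line.split())
         | "Pair" >> beam.Map(lambda word: (word, 1))
         | "Count" >> beam.CombinePerKey(sum)
         | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
         # 3. Where the results are written: a text file sink.
         | "Write" >> beam.io.WriteToText("counts"))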

And why is Apache Beam so useful? Before Apache Beam appeared, there was no unified API in the big data world. Frameworks like Hadoop, Flink, and Spark each provided their own way to define data processing pipelines. Beam lets you write your application once, saving cost and time.

Here are the types of use cases where Beam can prove its value:

  1. Moving data between different storage media
  2. Transforming data into a more desirable format (see the sketch after this list)
  3. Real-time data processing (e.g. detecting anomalies in real-time data)
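As a sketch of the second use case in the Python SDK, here is a pipeline converting CSV records to JSON (the three-column schema and the file names are made up for this example):

    import csv
    import json

    import apache_beam as beam

    def csv_to_json(line):
        # Hypothetical three-column schema, invented for this example.
        name, city, age = next(csv.reader([line]))
        return json.dumps({"name": name, "city": city, "age": int(age)})

    with beam.Pipeline() as p:
        (p
         | "ReadCsv" >> beam.io.ReadFromText("people.csv", skip_header_lines=1)
         | "ToJson" >> beam.Map(csv_to_json)
         | "WriteJson" >> beam.io.WriteToText("people", file_name_suffix=".json"))

Swapping the bounded file source for a streaming one would turn the same transformation into a real-time job.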