We're building a computing platform

A quick update on the vision for EasyMorph:

We’re effectively building a no-code data computing platform (in a general sense). EasyMorph is gradually acquiring all the essential components of a complete data computing platform. We’ve created a very versatile visual workflow language with loops, subroutines, conditional branching, automatic parallelism, exceptions, error handling, and even mutexes. We’ve created a fast, highly performant runtime that can execute a large number of workflows in parallel. So the computing part of the platform is already here, and it’s pretty solid.

A file store is here too. EasyMorph Server can already work as a file server: upload, download, and manage files and folders on EasyMorph Server not only from Windows but also from Linux or any OS that supports HTTP. I’d like EasyMorph Server to be able to work as an SFTP server at some point, but that’s not on the roadmap yet.

Now the data persistence part is starting to take shape. EasyMorph has got a built-in key-value store, the Shared Memory. A key-value store is an essential component of a computing platform.
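For readers less familiar with the concept: the semantics of a key-value store can be sketched in a few lines. This is a purely illustrative sketch in Python — the class and method names here are made up for illustration and are not EasyMorph's Shared Memory API, which is accessed through workflow actions.

```python
# Illustrative sketch of key-value store semantics.
# Not EasyMorph's actual API -- names here are hypothetical.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        """Store a value under a key, overwriting any previous value."""
        self._data[key] = value

    def get(self, key, default=None):
        """Retrieve the value for a key, or a default if the key is absent."""
        return self._data.get(key, default)

    def delete(self, key):
        """Remove a key if it exists; do nothing otherwise."""
        self._data.pop(key, None)

# Typical platform use: independent workflows coordinate through shared keys,
# e.g. one workflow records its last run date for another to read.
store = KeyValueStore()
store.set("last_run/sales_etl", "2024-01-15")
print(store.get("last_run/sales_etl"))  # prints 2024-01-15
```

The point is the contract (set, get, delete by key), not the implementation: any workflow on the platform can read what another workflow wrote, which makes this a simple coordination primitive.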

More data storing capabilities will come next year.

You may ask: what’s the point of building a complete data computing platform? The short answer: because we want to make your life easier. We want you, as a data professional, to be able to get things done quickly without drowning in technical documentation, without paying through the nose to the current IT oligopoly, and without worrying whether processing a thousand more rows or running your workflow five more times would break your annual budget.


Many thanks dgudkov! I’ll check it out.

Hi Dmitry,

Thank you very much for making a robust platform like EasyMorph and helping data professionals improve their productivity and effectiveness. We wish you all the very best with your vision.



What does this mean, exactly? There’s a lot of pressure to move to the cloud and use cloud services for ETL. How will EasyMorph set itself apart from these offerings?

Thanks !

Thank you, Navin!

Briefly speaking, computing in a general sense is a combination of:

  1. Storing data
  2. Moving data from one store to another store
  3. Calculating new data based on existing data

So far, EasyMorph has been focusing on (3) and (2). Now we're adding (1). For storing data, again in a general sense, there are 3 typical options:

A. File store
B. Key-value store
C. Relational database

So (A) already exists in EasyMorph (as Files in EasyMorph Server). (B) also exists, as Shared Memory. The only thing remaining is (C), a relational database. We won't design a brand-new database, of course, but at some point we will add a lightweight, built-in, zero-administration database based on a popular open-source DB. It will also have a data editor that allows editing data in DB tables.
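The post doesn't name which open-source DB will be used. SQLite is a well-known example of a zero-administration embedded database, so here is a short sketch of what "zero administration" means in practice — no server process, no setup, just open and use. This is purely illustrative and is an assumption, not a statement about EasyMorph's actual implementation:

```python
import sqlite3

# Illustrative only: SQLite as an example of a zero-administration,
# embedded open-source database (the post doesn't name the actual DB).
conn = sqlite3.connect(":memory:")  # no server to install or administer

# Storing tabular data:
conn.execute("CREATE TABLE metrics (name TEXT PRIMARY KEY, value REAL)")
conn.execute("INSERT INTO metrics VALUES ('revenue', 1250.0)")

# Editing data (what a built-in data editor would do under the hood):
conn.execute("UPDATE metrics SET value = 1300.0 WHERE name = 'revenue'")

# Querying it back:
row = conn.execute("SELECT value FROM metrics WHERE name = 'revenue'").fetchone()
print(row[0])  # 1300.0
```

The appeal of this class of database is exactly what the post describes: full relational storage and editing without a DBA, a connection string, or a separate service to run.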

This will make EasyMorph rather complete as a computing platform. It means that basic operations with tabular data (storing, moving, computing, and editing) will be possible entirely within EasyMorph and its security model.

You can think of EasyMorph as your private micro-cloud, if you wish. I'm a firm believer that public clouds exist only because of their convenience, yet they also create a lot of problems that didn't exist before. However, software can be convenient without being cloud-based.

What is the stated rationale for the move to the cloud? If you can elaborate on that, it will be easier for me to answer your question.


The rationale for most IT managers is the scalability and elasticity of the cloud. The pay-as-you-use pricing also seems to draw them to cloud services, and cloud firms offer a lot of data-related services.
Personally, I think EasyMorph has its value as a solid graphical ETL tool. I have not seen other tools that are as easy to work with.
I also think it can be a challenge to combine all the different sorts of cloud services into a total solution.
How will EasyMorph distinguish itself from, for example, Azure Data Factory, Azure Synapse, Azure Fabric, Azure Purview, etc.?


I didn’t make myself clear. I understood that you’re under pressure to move to the cloud. I’m curious, what is the rationale of those people who pressure you for such a move? Let’s analyze your concrete case, not the rationale of some abstract “most IT managers”.

There is not, and will never be, a universal “one for all cases” correct answer. It’s like asking “What’s the best BI tool?” Every situation is different. Different organizations have very different needs, including needs for elasticity and scalability. For some, the workload doesn’t really change; for others, it changes by two orders of magnitude over a year.

So why do you consider moving to the cloud (if you really do)?


Initially, the move to the cloud is about infrastructure flexibility, but then the debate rapidly turns to the data-related products the cloud offers.
The question then is why one should choose a desktop ETL tool hosted in the cloud over cloud services that also offer ETL capabilities. Often these products are offered as web applications.

I am a passionate EasyMorph user because I think it is a solid product, but I would like to have a better view of where EasyMorph differs from cloud services for ETL and why a separate product could be beneficial over them.
How can EasyMorph fit into a cloud architecture? Are there architectures that use multiple tools (e.g. EasyMorph alongside a cloud tool for ETL), and what are the pros and cons of such architectures?

Sorry if I am not expressing it crystal clear :slight_smile: but I experience these kinds of debates now and then.

EasyMorph has tons of benefits compared to cloud-based ETL:

  • Ultimate insight into data. Being able to see and analyze the full output of every transformation step is a big deal (here is a good video that explains why). A snappy UI with reactive calculations. Cloud ETL doesn’t have that.
  • Flat predictable pricing with a guarantee of no price increase. Unlimited volume of data, unlimited runs, unlimited workflow steps, etc. No cloud ETL vendor can give you that.
  • Privacy - all data is stored on computers that you control and never leaves them.
  • Performance. Many of our customers use EasyMorph because they were not happy with the performance of cloud ETL tools.
  • Power and versatility. No other ETL tool (let alone a cloud ETL tool) provides such a broad range of actions together with complex workflow constructs such as advanced loops, exceptions, conditional branching, mutexes, and a built-in key-value store.
  • Automation as an integral part of ETL. Most ETL tools only focus on data transformation and can’t perform file operations, web requests, send emails, etc.
  • Ease of use and administration. Enterprise clouds are extremely cumbersome and frequently user-hostile. EasyMorph Server is easy to administer.
  • Cloud-agnostic. If you decide tomorrow to switch from Azure to AWS (never say never), or from Power BI to Tableau, you will need to throw away all the workflows you’ve designed in Azure. EasyMorph reduces your cloud lock-in because it’s cloud-agnostic.
  • The possibility of involving non-technical users in workflow prototyping, designing, and maintaining. The self-service capabilities of EasyMorph make it a common “data language” for IT and non-technical people. Forget about it with cloud ETL - non-technical people can’t and won’t use cloud ETL tools.

It fits into a cloud architecture pretty well. EasyMorph works with all major cloud databases, with all cloud file stores, on any cloud VM that runs Windows. EasyMorph Server tasks can be triggered from any cloud application via a webhook, and, in turn, EasyMorph tasks can perform various operations with cloud apps and services either natively or via an API.
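To make the webhook integration concrete, here is a minimal sketch of what triggering a server-side task from any cloud application looks like. The base URL, endpoint path, and auth scheme below are placeholders I made up for illustration — they are not EasyMorph Server's actual API routes, so consult the Server documentation for the real ones:

```python
import json
import urllib.request

# Hypothetical example of triggering a server-side task via an HTTP webhook.
# The URL path and auth header are placeholders, NOT EasyMorph Server's real API.
def build_trigger_request(base_url, task_id, token, params=None):
    """Construct (but don't send) an HTTP POST that would start a task."""
    url = f"{base_url}/api/tasks/{task_id}/start"  # placeholder route
    body = json.dumps({"parameters": params or {}}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

req = build_trigger_request("https://server.example.com", "nightly-etl", "SECRET")
print(req.full_url)  # https://server.example.com/api/tasks/nightly-etl/start
print(req.method)    # POST
# To actually fire the task, you would call urllib.request.urlopen(req).
```

The design point is that any system capable of an HTTP POST — a cloud function, a CI job, another SaaS app — can kick off a workflow this way, which is what makes webhook triggers a cloud-agnostic integration mechanism.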


A post was split to a new topic: Are there any plans in the roadmap for building non Windows-VM servers?

We continue developing EasyMorph in this direction. Now it's taking a more concrete shape. We've also decided to adopt the term "hyper-automation platform" for what we're building. Here it is at a glance:

  • :white_check_mark: Data prep and enterprise ETL
  • :white_check_mark: Web (REST) API server
  • :hammer_and_wrench: Data exploration and collaboration (boards)
  • :hammer_and_wrench: Work management (semi-automated issue creation/management)
  • Task triggers
    • Examples of triggers: scheduler, file appears in a folder, another task fails, email received, web API response changed, rows added to database table, metric below/above threshold, delayed events, etc.
    • External webhooks will also become triggers
  • :hammer_and_wrench: Collecting data (files and forms) from external people/organizations
  • Sharing data (files, datasets, metrics) with external people/organizations

:hammer_and_wrench: = work in progress.


Wow, Dmitry! Impressive work you and your team are doing. And I really agree that you are moving in the right direction!
I got very curious about the data collection part you mentioned. Do you have any more thoughts to share (or have you already shared something)?
We are seeing a need for that type of solution and have looked at other platforms for the frontend.
Please let me know when you have anything to share!

Here you go @emilgotting :slight_smile:
