Event 05.03.2020 Microservices Meetup Rhein-Main

February 9, 2020

Rhein-Main Microservices Meetup we are coming!

Our CTO Dominik Fries will speak on 05.03.2020 about “Distributed tracing with @Kafka and Zipkin”. Learn more about event-driven architectures with microservices and how to monitor them! The event takes place at codecentric AG, Kreuznacher Str. 30, 60486 Frankfurt am Main, and starts at 18:00. Afterwards there will be pizza, cold beer, and plenty of time for networking. We are looking forward to seeing you! Register here: https://lnkd.in/d5tDMvR


Workshop “Data Engineering for Managers” in Berlin with Birds on Mars

February 9, 2020


Our partner #birdsOnMars invited our CTO Dominik Fries to a great workshop on “Data Engineering for Managers” at the #bahnAcademy in Potsdam! It was a pleasure to introduce the inspiring managers of DB to this important topic, especially together with Florian Dohmann, Hartmut Wilke, and Christoph Bauer.

Thank you!

Also a special and warm thank you to @Christian Holz for the organization and guidance during the workshop.


Philipp D’Angelo wins the Health Insurance Hack 2020 in Leipzig

February 9, 2020


Philipp D’Angelo is setting us on the path to a new location: #Leipzig. At his very first community meeting, #HIHC2020, he and a dedicated team won the prize for the best innovative health concept.

We look forward to many more exciting activities and projects in this amazing city! Leipzig, we are coming!

Expand your knowledge by always expanding your community


The 6 Most Important Things I Have Learned in My 6 Months Using Serverless

AWS CLOUD LAMBDA

Finding the right tools is of utmost importance in the world of serverless. October served as an eye-opener for me and my company: after the Serverlessconf tour, I decided there and then that my company would run on serverless.

The first two months were a nightmare as we struggled to familiarize ourselves with the new approach. Six months down the line, we are now deploying a major project serverlessly, the fourth one in the company.

This article explains how we did it and provides the six most important lessons we learned in the process.

  1. The middle layer has to go

We have been developing web apps forever. This is one of the reasons it took us so long to recognize one very obvious advantage of serverless.

Some of our first web apps still had a Node Express layer. This layer remembered session state, either by accident or by a tragedy of design in which we misused DynamoDB to make it remember sessions.

During the first phase of the transition, the middle layer simply served as a web server running on Lambda. We later discovered how wrong and terrible that was: it was unmaintainable. In the end we managed to get rid of the middle layer entirely, leaving HTML pages full of JavaScript that call REST APIs directly. This is what should happen in serverless.
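
A minimal sketch of what remains once the middle layer is gone, assuming the aws-sdk v2 Node runtime behind an API Gateway proxy integration; the handler and table names are illustrative, not our production code:

    // Replaces an Express route: one Lambda per endpoint, no web server.
    const AWS = require("aws-sdk");
    const dynamo = new AWS.DynamoDB.DocumentClient();

    exports.handler = async (event) => {
      // API Gateway supplies path parameters instead of Express's req.params.
      const id = event.pathParameters.id;
      const result = await dynamo
        .get({ TableName: "orders", Key: { id } })
        .promise();

      // No session state lives here: every request is self-contained.
      return {
        statusCode: result.Item ? 200 : 404,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(result.Item || { error: "not found" }),
      };
    };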

  2. Love DynamoDB

Getting a grip on DynamoDB is arguably the hardest part of using serverless. You encounter a few difficulties in your first interactions, and most of the time you want to go back to the more familiar RDS. Structured Query Language (SQL) has served me well for as long as I can remember, and I have put a lot of business logic into databases. RDBMS systems, on the other hand, fail to scale well and do not suit agile systems that have evolved organically.

DynamoDB is on a whole new level. Get this NoSQL database right and the benefits include massive scale, very fast performance, and zero administrative overhead. But there are sharp edges: table fields may not contain empty strings, you will be forced to start over if you get the partition and sort keys wrong, and emulating your SQL queries too closely can leave you with far too many tables.

After several attempts, and finally succeeding with DynamoDB, here is what I learned:

  • Do not just jump into it without getting the facts right. Understand how it works first, lest you get disappointed and move back to an RDBMS.
  • It has extremely powerful tools. You can use streams to attach code to table events (see the sketch after this list).
  • It can feed other storage systems, and it can protect other databases from enormous data volumes.
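
Here is the kind of thing the streams feature makes possible, sketched under the assumption of a Lambda function subscribed to a table’s stream; the table and the downstream targets are illustrative:

    // Runs on every batch of table events delivered by the stream.
    exports.handler = async (event) => {
      for (const record of event.Records) {
        if (record.eventName === "INSERT") {
          // NewImage arrives in raw DynamoDB attribute format ({ S: "..." } etc.).
          const item = record.dynamodb.NewImage;
          console.log("New item written:", JSON.stringify(item));
          // From here you could feed a search index, a cache, or an archive,
          // shielding other databases from the raw write volume.
        }
      }
    };
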
  3. Authorization is key

Traditionally, you authenticated a user once and then tracked them with a session ID, which controlled access from then on. This was time-saving, as you only had to do the hard work once. But there are problems with this approach. First, it only works if there is a server in the middle, and in serverless that server has been burned to the ground. Second, it exposes you to malicious attacks like CSRF. It also makes passing identity on to other services very difficult.

Nobody likes CSRF attacks. This is where the JWT comes in. Here is an illustration of how it works:

  • Step 1: get a JWT. You are issued a JWT after authenticating.
  • Step 2: use it on every request to the services you write.

Below are some reasons why you should use JWT:

  • The client can talk to more than one serverless service
  • Every request is authenticated
  • It is secure
  • It is anti-monolith
  • It is CSRF-free

The only thing needed from your serverless code is a custom authorizer that determines whether the token in the header is valid.
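
A minimal sketch of such an authorizer, assuming the jsonwebtoken npm package and an HMAC-signed token; an Auth0 setup would verify an RS256 signature against their public key instead:

    const jwt = require("jsonwebtoken");

    exports.handler = async (event) => {
      try {
        // The client sends "Authorization: Bearer <token>" on every request.
        const token = (event.authorizationToken || "").replace("Bearer ", "");
        const claims = jwt.verify(token, process.env.JWT_SECRET);

        // Allow the call and pass the caller's identity downstream.
        return {
          principalId: claims.sub,
          policyDocument: {
            Version: "2012-10-17",
            Statement: [{
              Action: "execute-api:Invoke",
              Effect: "Allow",
              Resource: event.methodArn,
            }],
          },
        };
      } catch (err) {
        throw new Error("Unauthorized"); // API Gateway answers with a 401.
      }
    };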

However, building the full authentication flow around JWTs yourself is complex work, so we shifted to Auth0. With that in place, serverless auth is extremely simple and very effective.

  4. Bye Python

While Flask was a nice framework in the traditional server-rendered days, it is not suitable for the new world.

To support rich interactions, you have to move more work to the client side, which leaves you no option but JavaScript. The result is always JavaScript inlined into your Python templates.

Our Flask solutions had increasingly become a clumsy mess of different, inefficient languages. After turning to Node, things were much more convenient, and I could use just one language. A simple Node configuration also lets you use ES6 and avoid the clumsier legacy JavaScript constructs. And that is how we ditched Python and joined the JavaScript bandwagon.

People using Python will boast about its impressive language features, but that is nothing compared to the charms of JavaScript.

  5. Enjoy the Vue while it lasts

I was introduced to React when I entered the world of Single Page Applications. React is the most popular approach to building user interfaces. It is great, but its drawbacks include a steep learning curve, the Webpack setup, and the introduction of JSX. We chose to explore alternatives, since it was too heavyweight for our immediate use.

I soon discovered Vue.js, and my serverless life turned around. You can learn all of Vue in just a few days. Other significant advantages of Vue include:

  • Everything is a component that manages its own content, design, and code. This makes it easy to manage our clients’ projects, of which there are quite a number. (A minimal component is sketched after this list.)
  • The open-source JavaScript framework provides powerful debugging tools, a Webpack setup, and great project organization.
  • Vue allows you to build a desktop-application feel within the browser, improving the user experience.
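
To make the first point concrete, here is roughly what a component looks like; a minimal, illustrative single-file component from the Vue 2 era this post describes:

    <!-- ClickCounter.vue: markup, logic, and style managed in one file -->
    <template>
      <button @click="count++">Clicked {{ count }} times</button>
    </template>

    <script>
    export default {
      name: "ClickCounter",
      data() {
        return { count: 0 };
      },
    };
    </script>

    <style scoped>
    button { font-size: 1.2em; }
    </style>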

 

In the old world, we would deploy apps through Elastic Beanstalk, monitor them for utilization, and manage the infrastructure. With an SPA, however, deploying an application means copying index.html, bundle.js, and a few file dependencies to an S3 bucket fronted by a CloudFront distribution. This gives you steady loading and distribution behavior, and it enables multi-version management.

There’s zero app infrastructure management.
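
As a rough illustration of how little a “deployment” involves, here is a sketch using the aws-sdk v2 from Node; the bucket name and file list are made up, and in practice a CLI sync of the dist/ folder does the same job:

    const AWS = require("aws-sdk");
    const fs = require("fs");

    const s3 = new AWS.S3();
    const files = [
      { name: "index.html", type: "text/html" },
      { name: "bundle.js", type: "application/javascript" },
    ];

    // "Deploying" the SPA is just copying the build output into the bucket
    // that CloudFront fronts; there is no server to restart.
    async function deploy() {
      for (const f of files) {
        await s3
          .putObject({
            Bucket: "my-spa-bucket",
            Key: f.name,
            Body: fs.readFileSync(`dist/${f.name}`),
            ContentType: f.type,
          })
          .promise();
        console.log(`uploaded ${f.name}`);
      }
    }

    deploy().catch(console.error);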

  6. The Serverless Framework

My first days with Lambda involved coding directly in the AWS console. I often got stressed, as it took a lot of work and many mistakes to get tasks done. It took me quite some time to realize what was missing: a bridge connecting my IDE to the production environment.

All along, the solution had been the Serverless Framework. A single sls deploy bundles up your precious code and ships it directly to Amazon’s brain. If your code is misbehaving and you need to check the logs, type sls logs -f functionname -t.

This is a wonderful experience, and the Serverless Framework team’s contribution deserves applause.

 

Cheers to new beginnings!

After attending A Cloud Guru’s serverless conference, we felt that this was clearly an unexplored area with limitless potential.

During our first experiments we had our fair share of failures, and the results weren’t satisfying. Months after getting the right stack in place, we started to officially deliver projects in a 100% serverless way. All the migration difficulties and failures along the way were worth it.

We have embarked on a journey of building SPA apps that use serverless infrastructure and scale effortlessly. The apps also cost 70-90% less, and the payoff is incredible. Serverless technology is going to revolutionize how applications are delivered in the cloud.








Reasons Why DynamoDB is Not for Everyone

AWS CLOUD LAMBDA

Amazon Dynamo was created in 2004 to scale beyond the growth limits of Amazon’s Oracle database infrastructure. The aim behind its creation was to meet the business’s requirements: scalability, performance, and reliability.

In 2012, AWS announced the availability of DynamoDB as a fully managed NoSQL database service, promising seamless scalability.

Why choose DynamoDB?

I interviewed a number of developers and engineers about their experience with DynamoDB. Even though this database service has many success stories, it has also left behind many failed implementations. To understand why DynamoDB succeeds in some areas and fails in others, you first have to understand the tension between two of its greatest promises: scalability and simplicity.

DynamoDB is simple to use until it refuses to scale

Throwing data into DynamoDB is about the easiest thing you can do: there is no logging into servers and no cluster to set up, all thanks to AWS. To start using the service, you just turn a knob, grab an SDK, and sling JSON.
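
For a sense of how little ceremony that first write involves, here is a sketch using the aws-sdk v2 DocumentClient from Node; the table name and item are illustrative:

    const AWS = require("aws-sdk");
    const dynamo = new AWS.DynamoDB.DocumentClient();

    // Slinging JSON: no connection strings, no cluster, no schema migration.
    dynamo
      .put({
        TableName: "users",
        Item: { userId: "42", name: "Ada", plan: "free" },
      })
      .promise()
      .then(() => console.log("stored"))
      .catch(console.error);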

However, as simple as DynamoDB is to interact with, designing its architecture is a difficult task. It works well for retrieving individual records based on key lookups. Where complex scans and queries are involved, the indexing has to be designed carefully; this is a must even if the amount of data isn’t huge and you are familiar with NoSQL design principles.
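
The difference shows up directly in the API. A sketch, again with the aws-sdk v2 DocumentClient and illustrative names, assuming a table keyed by customerId (partition key) and orderDate (sort key):

    const AWS = require("aws-sdk");
    const dynamo = new AWS.DynamoDB.DocumentClient();

    async function compare() {
      // Cheap and fast: a lookup addressed by partition and sort key.
      const orders = await dynamo
        .query({
          TableName: "orders",
          KeyConditionExpression: "customerId = :c AND orderDate > :d",
          ExpressionAttributeValues: { ":c": "cust-1", ":d": "2018-01-01" },
        })
        .promise();

      // Expensive at any size: a filtered scan still reads every item.
      const bigOrders = await dynamo
        .scan({
          TableName: "orders",
          FilterExpression: "total > :t",
          ExpressionAttributeValues: { ":t": 100 },
        })
        .promise();

      return { orders, bigOrders };
    }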

Most developers know a lot about classic relational database design but not much about NoSQL. A combination of inexperienced developers, the absence of a clear plan for modeling the dataset in DynamoDB, and a fully managed database service is a recipe for failure.

First Law of DynamoDB

The first law of DynamoDB: assume that implementing it will be harder than using the relational database you are already well-versed in. At a small scale, a relational database will accomplish every one of your needs. It will initially take longer to set up than DynamoDB, but the well-established SQL conventions will save you a lot of time in the long run. This is not because DynamoDB’s technology is awful; it is because it is new to you.

DynamoDB can be scaled – until it’s not simple

For this article, I also interviewed a few happy DynamoDB customers. DynamoDB promises great performance at an infinite scale, limited only by the size of the AWS cloud. These customers sit squarely in its sweet spot: key-value lookups on well-distributed records, no complicated queries, and few hot keys.

Hot keys are dealt with in detail in the DynamoDB developer guide. Although DynamoDB can scale indefinitely, your data is not stored on one ever-expanding server. As your data grows, it is divided into chunks, each living on its own partition.

If you have a hot key in your dataset, you must ensure that the provisioned capacity on your table is set high enough to handle all of its queries.

With DynamoDB, you can only provision capacity at the level of the entire table, not per partition. That capacity is then divided up among the partitions by a fairly wonky formula, so the read and write capacity available for any single record shrinks as the table grows. For example, a table provisioned at 10,000 RCUs split across 10 partitions leaves each partition with roughly 1,000 RCUs. If your application needs too many RCUs on one key, you can do one of three things: over-provision all the other partitions (which is rather expensive), accept throttling errors, or decrease access to that key.

One thing to note, however, is that DynamoDB is not suited to datasets that are a mixture of hot and cold records. Yet at a large scale, almost every dataset ends up with such a mixture. You can split the hot and cold data into separate tables, but then you lose much of DynamoDB’s scalability advantage.

A recently published article, “The Million Dollar Engineering Problem”, showed how Segment decreased their AWS bill by fixing DynamoDB over-provisioning. Alongside the article was a heat-map graphic that showed which partitions were troublesome.

The graphic originated from AWS’s internal tools. Segment’s strategy for identifying the troublesome keys was to wrap their DynamoDB calls in a try/catch and record which keys triggered throttling errors. Segment then had to battle the hot-key problem, and this is where the simplicity and scalability factors collide.
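
A minimal sketch of that wrapping idea, with illustrative names; this only surfaces which keys are throttled, which is exactly the insight the black box otherwise hides:

    const AWS = require("aws-sdk");
    const dynamo = new AWS.DynamoDB.DocumentClient();

    async function getItem(key) {
      try {
        return await dynamo.get({ TableName: "events", Key: key }).promise();
      } catch (err) {
        if (err.code === "ProvisionedThroughputExceededException") {
          // Log the offending key so hot partitions can be identified
          // (and, as Segment did, problem keys can then be dealt with).
          console.warn("throttled key:", JSON.stringify(key));
        }
        throw err;
      }
    }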

DynamoDB is designed as a black box with very few user-accessible controls. When you are starting out, this is exactly what makes it easy to use. But at production scale, you need more insight into how your data is misbehaving.

Second Law of DynamoDB

The second law of DynamoDB: at a massive scale, DynamoDB’s usability is limited by its own simplicity. The problem lies in what AWS has chosen to expose, not in Dynamo’s architecture. The inability to back up 100 TB of DynamoDB data was the leading reason why Timehop moved off the service altogether.

What if you can’t use DynamoDB?

First, let us look at the advantages and disadvantages of using DynamoDB.

Pros

  • Handles huge workloads.
  • Enables you to redesign many applications.
  • Enables you to store state in a K/V table.
  • Enables event-driven architectures that suit your needs.

Cons

  • Not the best fit at a small scale, where a familiar relational database is easier (see the First Law).
  • Not every application can be redesigned around it.

Just because you can use DynamoDB does not mean you should. Using Dynamo without fully understanding it will leave you spinning your wheels through several code rewrites before you land on a solution that works.

Third Law of DynamoDB

The third law of DynamoDB: business value trumps architectural design every time.

This is why several of the developers I interviewed abandoned NoSQL when building solutions for small and mid-sized businesses. It is the same reason Timehop moved from DynamoDB to Aurora. And it also explains why DynamoDB has lots of case studies from happy customers around the globe.

The Introduction of WhynamoDB

Amazon will at some point announce the release of WhynamoDB, and when it does, a decision tree will guide you on this new service.
