Powerful avoid-that-employer sign?

There are so many formal characteristics to consider when choosing a new employer: the development opportunities, the team, the boss, the processes, the location, remote work, growth plans and stability… and, of course, the financial part, but this write-up is not for those ready to sacrifice their dream job for a larger check.

There are also soft characteristics to watch for: are the interviewers smiling, are they relaxed or stressed, is the interview more like a Q&A or a friendly discussion, are you judged by a computer or by a human, are you presented to your potential team, or are there layers of people between the interviewers and your actual team?

These are all important characteristics, but this write-up will focus on one relatively new sign which, although subtle, is a composite indicator quite strongly related to almost all of a company's characteristics: the advance notice period. Yes, that’s right, the period stating how many months in advance you are obliged to notify your company of your departure. This little clause, occupying an almost unnoticeable place in your draft contract, may tell you a great deal about the company.

We are living in a time when an educated workforce is scarce and hard to find. That is why, employers explain, it is necessary to raise the norm of a 1-month advance period to 2 or more months, so that the employer can find a suitable replacement for the departing employee. Sounds logical? At first glance, yes, but let’s argue about it.

Imagine a company that knows how to build teams and cares about its people. The company knows exactly what John’s role and contribution to the overall success are: his professional qualities, his strengths and weaknesses, where he fits best, how he feels today, and what his professional desires are.

Such a company would definitely notice when John starts to feel unhappy, and act long before he starts thinking of leaving, e.g. by offering another opportunity that better matches John’s expectations.

Such a company would structure its teams so that there are no single points of failure. If John decides tomorrow to take parental leave, no problem. So what is the point of forcing a person to literally wait 2 or 3 months before he is able to pursue his new dream?

Such a company, even a large enterprise, would not measure its success by the headcount of this or that office. Companies that do usually argue that if John leaves, his position would be closed and the office would be forced to report a lower headcount compared to other company offices. Do you want to work in the luxurious office of a prestigious company if you are treated as a headcount, and if the office’s success is measured in headcounts rather than in useful, innovative products?

Lastly, when you see a longer advance notice period, check the attrition rates. There is a chance that the company is just foolishly trying to lower its abnormal attrition. Question any number above 5% for large companies and 20% for startups.

I don’t understand the rising trend of longer advance notice periods. To me, it does not fix the core issues or address why people are leaving; it just tries to cover it all up in a naive way. On the other hand, there are companies that care about their employees and treat them with the utmost respect, as people who are an integral part of the success. Such companies usually won’t hold you back even for a day if you feel unhappy.

So next time you search for your next great adventure, you may also want to consider the notice period.


Data Analytics with Zero Latency and High Precision?

Everyone is “doing analytics” these days

Data Analytics is an IT buzzword. Hundreds of paradigms and solutions: change-data-capture, ETL, ELT, ingestion, staging, OLAP, data streaming, map/reduce, stream processing, data mining… Amazon Redshift and Lambda; Apache Kafka, Storm, Spark and Hadoop Map/Reduce; Oracle GoldenGate; VMware Continuent… a gazillion offerings. All this hype makes it easy to lose track.

The Problem

What is the problem that all these solutions aim to solve anyway?

The business needs precise and rapid answers to simple yet critical questions. That’s all there is to it. How it can be achieved is a longer story.

Let’s draw a real-life analogy: imagine that you are a coach and your business is to train a player for an upcoming competition. Unfortunately, your top player starts to feel sick. You immediately grab him and go to see a doctor. At the doctor’s office you fire off your concise, critical question: “Will my player be fit for the competition?”. The doctor’s answer is not that short: “Well, for an accurate assessment I will have to run several urine and blood tests, do an EKG, an ultrasound, chest X-rays and maybe an MRI. Your player needs to stay at the hospital for a couple of days and avoid exercise, as it makes the test results vary wildly, rendering the analysis hard. We will then correlate all the data and get back to you in a few more days.”. You stare in disbelief: first, this doctor is proposing to suspend training right before the event; second, the answers will come too late. No way!

Classical ETL

As ridiculous as it seems, data engineers often present customers with offers full of latency, lacking consistency or, worse, achieving consistency at the price of downtime.

Various tools, from Pentaho PDI to Sqoop + Hadoop M/R, implement a more or less classical extract-transform-load (ETL) flow (a minimal sketch follows the list):

  • Proprietary scripts to export operational data into a set of CSV files (Hopefully the engineer knows how to encode incremental exports).
  • Logic to import the CSVs into the ETL engine, with all the disk IO that imposes.
  • More logic to apply the actual analytical functions.
  • More scripts to load the results again into the analytical/reporting data store.
  • The result is a complex multi-step process spanning vast data volumes. This yields latency: by the time a change in the operational store is propagated to the reports, it may already be too late.
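
To make the shape of this pipeline concrete, here is a minimal, purely illustrative sketch in Python. The "orders" table, the "updated_at" watermark column and the aggregation are invented assumptions for the example, not any particular tool's behavior, but the latency-inducing steps mirror the list above.

    # Minimal classical-ETL sketch (illustrative only; the schema and the
    # watermark column are invented assumptions).
    import csv
    import sqlite3

    # Toy operational store, so the sketch runs end-to-end.
    op = sqlite3.connect(":memory:")
    op.execute("CREATE TABLE orders (order_id TEXT, customer_id TEXT, amount REAL, updated_at TEXT)")
    op.executemany(
        "INSERT INTO orders VALUES (?, ?, ?, ?)",
        [("o1", "c1", 10.0, "2024-01-02 09:00:00"),
         ("o2", "c2", 7.5, "2024-01-03 10:30:00")],
    )

    WATERMARK = "2024-01-01 00:00:00"  # timestamp of the last successful export

    # 1. Extract: dump the changed rows into a CSV file (all DB type info is lost here).
    with open("orders.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "customer_id", "amount", "updated_at"])
        writer.writerows(op.execute(
            "SELECT order_id, customer_id, amount, updated_at FROM orders WHERE updated_at > ?",
            (WATERMARK,),
        ))

    # 2. Transform: re-read the CSV (extra disk IO) and aggregate in memory.
    totals = {}
    with open("orders.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + float(row["amount"])

    # 3. Load: push the per-customer aggregates into the reporting store.
    rpt = sqlite3.connect(":memory:")
    rpt.execute("CREATE TABLE customer_totals (customer_id TEXT PRIMARY KEY, total REAL)")
    rpt.executemany("INSERT INTO customer_totals VALUES (?, ?)", totals.items())
    rpt.commit()
    print(rpt.execute("SELECT * FROM customer_totals").fetchall())

Every extra hop (database to CSV to memory to database) adds latency and another place where things can silently go wrong.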

There are more hidden perils:

  • On each export, the conveniently available database integrity and type checks are lost. The developer needs to encode them manually (a small illustration follows the list), e.g. explicitly set data types for all CSV columns, encode checks for invalid value ranges, etc. Otherwise there is a great risk of data quality issues in the reports.
  • Since typical ETL tools apply transformations in-memory, costly disk swapping is involved for larger data sets.
  • Even a single in-flight problem causes a restart of the entire lengthy job.
  • As latent, complex and error-prone as it is, the classical ETL process often also lacks consistency. If correlated data is modified concurrently during the export, the CSV files may contain inconsistent “relations”, e.g. employees without a department, because the department was added in the database after the export of the “department” table had finished but before the “employee” table was exported. Of course, you can employ consistent native database tools such as Oracle’s Data Pump or redo-log mining, but integrating that tooling into the general data flow increases the effort and complexity.
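
As a small illustration of the first peril, here is what re-encoding lost database checks by hand might look like; the column names and constraints are invented for the example.

    # Illustration only: manually re-implementing checks the source database
    # already enforced (NOT NULL, numeric types, CHECK constraints).
    def validate_order(row: dict) -> dict:
        """Re-encode type and range checks lost in the CSV round-trip."""
        if not row["order_id"]:
            raise ValueError("order_id must not be empty (was NOT NULL in the DB)")
        amount = float(row["amount"])  # CSV hands us strings, not NUMERIC values
        if amount < 0:
            raise ValueError(f"negative amount {amount} (was a CHECK constraint)")
        return {"order_id": row["order_id"], "amount": amount}

    # Rows as csv.DictReader would deliver them: everything is a string.
    raw_rows = [{"order_id": "o1", "amount": "10.0"}, {"order_id": "o2", "amount": "7.5"}]
    clean_rows = [validate_order(r) for r in raw_rows]

Multiply this by every table and every constraint, and the hidden cost of the CSV detour becomes clear.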

Stream Analytics

With all the data pouring into operational stores from IoT and 24/7 global Cloud exposure, ever vaster data volumes are screaming to be analyzed. The industry is responding with an approach that better suits enterprise scale – data stream analytics.

In summary, changes are captured as they occur and streamed to a scalable parallel processing engine. Incoming changes are analyzed immediately through delta-aware functions and stream transformations, and the results are merged (delta-aggregated) into the reporting store. A number of stream-aware frameworks facilitate such a process: Apache Kafka, Spark and Storm, for example. A combination of such tools provides low latency, yet does not by itself guarantee the consistency needed for high-precision decisions.
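
To illustrate the core idea without committing to any particular framework, here is a toy delta-aware aggregation in Python: each captured change adjusts a running result instead of triggering a full recomputation. The event schema ("op", "customer_id", "amount") is a made-up example; a real deployment would put Kafka, Spark or Storm between the capture and the aggregation and persist the results.

    # Toy delta-aware aggregation over a change stream (no framework; the
    # event schema is an invented example).
    from collections import defaultdict
    from typing import Dict, Iterable

    def apply_change_stream(events: Iterable[dict]) -> Dict[str, float]:
        """Maintain per-customer totals incrementally, one change at a time."""
        totals: Dict[str, float] = defaultdict(float)
        for ev in events:
            if ev["op"] == "insert":
                totals[ev["customer_id"]] += ev["amount"]
            elif ev["op"] == "delete":
                totals[ev["customer_id"]] -= ev["amount"]
            elif ev["op"] == "update":
                totals[ev["customer_id"]] += ev["new_amount"] - ev["old_amount"]
        return dict(totals)

    changes = [
        {"op": "insert", "customer_id": "c1", "amount": 10.0},
        {"op": "update", "customer_id": "c1", "old_amount": 10.0, "new_amount": 12.5},
        {"op": "insert", "customer_id": "c2", "amount": 7.0},
    ]
    print(apply_change_stream(changes))  # {'c1': 12.5, 'c2': 7.0}

Because nothing is recomputed from scratch, results stay fresh; the hard part, as noted above, is keeping them consistent.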

Developing robust and efficient stream analytics can be a very challenging task. One needs to integrate, or even implement from scratch, an efficient change-data-capture solution; Postgres, for example, has immature log-mining technology. Captured changes need to be correlated to ensure referential integrity and transactional consistency. One has to choose a scalable yet resilient computing framework able to overcome failures during stream analysis, glue all the systems together into a coherent, easy-to-use package, and figure out how to delta-merge the results into the analytical store.

“Hibernate” for Data Analytics

Remember Hibernate? The tool that revolutionized the engineering of persistence layers – easy to learn, with massive savings on boilerplate and error-prone persistence code. For the sake of objectivity, it also sometimes brought a lack of fine-grained SQL control.

We at DataStork believe it is about time data analytics benefited from such automation… while keeping fine-grained data-crunching control when needed.

Meet the DataStork way, “Hibernate” for data analytics:

Data analysts encode questions by using plain old SQL (Geeks can still use various languages to encode complex analytical functions).

  • We analyze encoded queries and deploy agents at the relevant data sources to capture the data changes.
  • Captured changes are streamed in efficient compressed form.
  • The defined questions/transformations are applied over the stream using a highly scalable and robust parallel computing framework.
  • Data operations are kept as close to the database data as possible, to avoid unnecessary disk IO and leverage existing type-info and constraints.
  • Entity relations are respected on both the operational and analytical databases to ensure fully consistent results, both transactionally and in terms of referential integrity.
  • Data analysts can inspect and adjust each of the generated SQL scripts for fine-grained control (a sketch of what this might look like follows the list).
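
To give a flavour of this, here is a sketch of how an analyst's plain-SQL question could be kept fresh by a generated delta-merge statement. The schema and the SQL are invented for illustration and are not DataStork's actual generated code.

    # Sketch only: invented schema, not DataStork's actual generated SQL.
    import sqlite3

    # The analyst's "question", stated in plain old SQL.
    QUESTION = """
    SELECT department_id, SUM(salary) AS total_salary
    FROM employees
    GROUP BY department_id
    """

    # One way an engine could keep that answer fresh: fold each captured
    # salary change (a delta) into the reporting table instead of re-running
    # the full aggregation. This is the kind of statement an analyst could
    # inspect and adjust.
    DELTA_MERGE = """
    INSERT INTO dept_salary_totals (department_id, total_salary)
    VALUES (:department_id, :delta)
    ON CONFLICT(department_id)
    DO UPDATE SET total_salary = total_salary + excluded.total_salary
    """

    rpt = sqlite3.connect(":memory:")
    rpt.execute("CREATE TABLE dept_salary_totals (department_id TEXT PRIMARY KEY, total_salary REAL)")
    rpt.execute(DELTA_MERGE, {"department_id": "sales", "delta": 500.0})   # new hire captured upstream
    rpt.execute(DELTA_MERGE, {"department_id": "sales", "delta": -120.0})  # later salary correction
    print(rpt.execute("SELECT * FROM dept_salary_totals").fetchall())      # [('sales', 380.0)]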

DataStork automates all aspects of modern data analytics through a combination of innovative EL-T and stream analytics, delivering near-zero latency and high consistency. This approach also works with legacy relational databases.

Now you would know right away whether you are fit to win the next major competition… because by the time you walk into the “doctor’s” office, all the information has already been analyzed and the needed answers are waiting for you.

You are welcome to get in touch for more details.


Technical Matrix

Experienced engineers are those who do not jump head-first into a solution with a hipster, next-great-thing technology approach, but rather carefully evaluate the unique customer needs and apply the right combination of technologies for optimal results. Like an experienced craftsman, each DataStork engineer masters a vast palette of tools and knows which ones to pick for the job.

Get a summary of the technologies that DataStork masters: DataStork-Expertise


Case Study: BigData for Logistics Optimizations

For logistics companies, reducing delivery times and expenses is a matter of survival. Yet enterprise logistics may require massive computing power and still discover the trends only after days of data crunching.

Learn how DataStork helped a leading logistics-optimization company scale its analytics, allowing them to accommodate more clients and serve faster responses to business-critical questions: DataStork-BigData-Case-Study