This post is a re-post of my original LinkedIn post.
Over the past few years, I've had the unique opportunity to see a start-up, TubeMogul, go through hyper-growth, an IPO, and an acquisition by a Fortune 500 company, Adobe. On this journey, I was exposed to a lot of technical challenges, and I worked on systems at an astonishing scale, i.e. over 350 billion real-time bidding requests a day. It allowed me to build some strong personal opinions on the role of an SRE and how they can help transform an organization. I'm lucky enough to work with a talented team of SREs who keep pushing the limits of innovation while executing through chaos.
As I flew back from the ML for DevOps summit in Houston, which Adobe sponsored, I took the time to reflect on some of the ways our SRE teams excel at their job and how they leverage machine learning and self-healing principles to scale their day-to-day operations.
I.T. systems, with the broad adoption of public and private clouds, get more complex over time. The hyper-adoption of microservices and the increase in loosely coupled distributed systems are obvious factors, though you can also see how IoT devices, edge computing, and the like factor into the mix.
The point being, it is increasingly difficult for a single individual to understand the space in which a product evolves and lives. One cannot assume to know it all; humans quickly reach their cognitive limits. So, how do SREs overcome this limit? Below is my take on the top 5 machine learning and self-healing techniques used by SREs to scale and operate increasingly complex environments.
Windows 10 provides a neat integration with the Linux kernel that allows you to run any binary from your favorite Linux distribution directly from Windows. This feature is called Windows Subsystem for Linux (WSL), and it opens a whole new world of opportunities.
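As a quick illustration (a minimal sketch; the exact version string can vary between WSL releases), you can check from inside the Linux shell whether you are running under WSL, since the WSL kernel typically reports "microsoft" in `/proc/version`:

```shell
# Detect whether the current Linux shell is running under WSL.
# The WSL kernel usually includes "microsoft" in its version string.
if grep -qi microsoft /proc/version 2>/dev/null; then
    echo "Running inside WSL"
else
    echo "Native Linux (or other environment)"
fi
```

From the Windows side, the same Linux binaries can be invoked through the `wsl` command, e.g. `wsl uname -r`.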
Parsing a comma-separated values (CSV) file from the command line can be challenging and error-prone, depending on the complexity of the file. Yet it is a frequent task in many automation scripts, or when you need to quickly process or reformat imported files. In this post we will cover some common ways to parse simple files with pure Bash or with AWK, and how to parse more complex CSV files.
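As a taste of what's ahead, here is a minimal sketch that splits simple CSV lines (no quoted fields or embedded commas) with pure Bash, by setting `IFS` to a comma for `read`; the field names and sample data are made up for the example:

```shell
# Split simple CSV lines into fields with pure Bash.
# IFS=',' tells read to use the comma as the field separator;
# -r prevents backslash interpretation.
printf '%s\n' "alice,30,Paris" "bob,25,Lyon" |
while IFS=',' read -r name age city; do
    echo "name=$name age=$age city=$city"
done
```

This approach breaks down as soon as fields contain quoted commas, which is why more robust tools come into play later in the post.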
In our previous post, we saw that you must use a subscript when assigning to an associative array. The documentation clearly mentions this requirement for the subscript part of the declaration.
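For instance (a minimal sketch with a hypothetical `capitals` array; requires Bash 4 or later):

```shell
# Bash requires a subscript (the key) when assigning to an associative array.
declare -A capitals           # declare an associative array
capitals[France]=Paris        # valid: the subscript "France" is given
capitals[Japan]=Tokyo
echo "${capitals[France]}"    # → Paris
```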
Version 2 of GNU Bash added support for array variables, a.k.a. one-dimensional indexed arrays (or lists). Version 4 brought support for associative arrays (a.k.a. dictionaries or hash tables). These features greatly simplify how you write your scripts and let you support more complex logic and use cases.
In this post, we will review how to declare, iterate over, and check values in indexed arrays and associative arrays.
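To set the stage, here is a minimal sketch covering all three operations (the array names and values are made up for the example; the `[[ -v ... ]]` test requires Bash 4.2 or later):

```shell
# Indexed array: declare and iterate over its values.
declare -a fruits=(apple banana cherry)
for f in "${fruits[@]}"; do
    echo "fruit: $f"
done

# Associative array (Bash 4+): iterate over its keys with "${!array[@]}".
declare -A prices=([apple]=2 [banana]=1)
for k in "${!prices[@]}"; do
    echo "$k costs ${prices[$k]}"
done

# Check whether a given key exists (Bash 4.2+).
if [[ -v prices[apple] ]]; then
    echo "apple has a price"
fi
```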