I’ve been looking through the latest Technology Radar issue, and here’s what I found in its new Techniques section: “TDD’ing containers”. Wow. Mentally, I’m not yet ready to connect TDD to containers, but I took a look at the tools behind the technique, and they are quite interesting.
The first one is serverspec, which allows running BDD-like tests against local or remote servers or containers. It looks pretty solid and supports multiple OSes; its only downside (for me) is that serverspec is written in Ruby and therefore doesn’t really fit the stack I normally work with.
The other one – goss – leaves an impression of a multitool, which usually worries me, but here… I’m kind of curious, so let’s have a look. Continue reading “How to unit test.. a server with goss”
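The excerpt only names the tool, so here’s a minimal sketch of what working with goss actually looks like, just to give a taste (the sshd and port choices below are mine, purely for illustration):

```sh
# Generate assertions from the current state of the server or container
goss autoadd sshd            # records service/port/process checks for sshd into goss.yaml
goss add port tcp:80         # explicitly assert that something listens on port 80

# Run the checks; a non-zero exit code means an assertion failed
goss validate --format documentation
```

The same goss.yaml can also be served as an HTTP health endpoint with goss serve, which is exactly the multitool vibe I was talking about.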
I’m still busy learning how to troubleshoot .NET Core apps on Linux. Now I more or less understand where to start if I need to analyze .NET Core memory usage, but that’s not the problem we run into most often. Most of the time the main problem is high CPU usage, which obviously leads to the question “What is this thing doing right now??”.
On Windows I’d usually start by analyzing application logs, trace files or performance reports. If that wasn’t enough, I’d download the PerfView tool and profile the running executable itself. How many threads are there? Which functions do they execute the most? What do their call stacks look like? And so forth. The only downside of that tool is that I never read its documentation, so the whole troubleshooting experience sometimes resembled a meditation on the monitor, worshiping the holy Google and wild guessing.
While logs and traces are obviously there on Linux as well, I was wondering: can I profile the app in a similar way to what I would do on Windows? The answer is “Oh, yes”. Monitor, worshiping, guessing – it’s all there.
There are multiple tools out there, but the basic toolkit for profiling a .NET Core app on Linux seems to be the perf utility along with perfcollect. Let’s have a look at both of them. Continue reading “Profiling .NET Core app on Linux”
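The details are in the full post, but the rough shape of the workflow, as far as I understand it, goes something like this (the app name and trace name are made up; the rest follows the documented perfcollect usage):

```sh
# perfcollect is a wrapper script from the .NET team around perf and LTTng
curl -OL https://aka.ms/perfcollect
chmod +x perfcollect
sudo ./perfcollect install

# .NET Core writes perf-readable symbol maps only if this is set before the app starts
export COMPlus_PerfMapEnabled=1
dotnet MyApp.dll &

# Record a trace (Ctrl+C to stop); the resulting archive opens nicely in PerfView on Windows
sudo ./perfcollect collect mytrace

# Or ask perf directly which functions are the hottest right now
sudo perf top -p $(pidof dotnet)
```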
Most of the last week I’ve been experimenting with our .NET Windows project running on Linux in Kubernetes. It’s not as crazy as it sounds. We already migrated from .NET Framework to .NET Core, I fixed whatever was incompatible with Linux, tweaked here and there so it can run in k8s, and it really does now. In theory.
In practice, there are still occasional StackOverflow exceptions (zero segfaults, however), and most of the troubleshooting experience I had on Windows is useless here on Linux. For instance, very quickly we noticed that the memory consumption of our executable was higher than we’d expect. Physical memory varied between 300 MiB and 2 GiB, and virtual memory was tens and tens of gigabytes. I know in production we could use much more than that, but here, in a container on Linux, is that OK? How do I even analyze that?
On Windows I’d take a process dump, feed it to Visual Studio or WinDbg and try to google what to do next. Apparently, googling works for Linux as well, so after a few hours I managed to learn several things about debugging on Linux, and I’d like to share some of them today. Continue reading “Analyzing .NET Core memory on Linux with LLDB”
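To give a flavor of where this ends up: the core of it is lldb plus the SOS plugin that ships with the .NET Core runtime. A rough sketch, assuming a core dump already exists (the plugin path depends on the installed runtime version; mine is just an example):

```sh
# Open the core dump together with the dotnet host binary that produced it
lldb $(which dotnet) -c core

# Inside lldb: load the SOS plugin, then poke at the managed heap
(lldb) plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0/libsosplugin.so
(lldb) sos DumpHeap -stat        # object counts and sizes per type
(lldb) sos EEHeap -gc            # how the GC heap segments add up
```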
So far all the examples I made for the Docker Swarm Mode and Kubernetes blog posts were built around some sort of service: a web server, a message queue, a message bus. After all, “service” is the main concept in Swarm Mode, and even the whole microservice application thing has, well, a “service” in it. But what about one-off jobs: maintenance tasks, scheduled events, or anything else that we need to run only occasionally, not as a service?
Continue reading “One-off Kubernetes jobs”
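Without spoiling the post: the Kubernetes answer is the Job object, and a minimal one looks something like this (the image and command are placeholders I picked just to have something runnable):

```sh
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo doing maintenance && sleep 5"]
EOF

# The job runs to completion once, and its pod stays around so the logs remain readable
kubectl get jobs
kubectl logs job/one-off-task
```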
Much to my surprise, starting from last week Kubernetes became part of my job description. It’s no longer just something interesting to try; I actually have to understand it now. And as you could probably tell from my older k8s post, I’m not quite there yet. The post sort of builds a logical example (a containerized web server), but something just doesn’t click.
I was trying to understand what’s missing, and it seems like the problem is in the tooling. You see, there are two and a half ways to run something in Kubernetes. One is through ad-hoc commands like kubectl run or kubectl expose. They are simple, but they also skip a few important concepts happening in the background, so the whole picture stays unclear. Continue reading “Dissecting Kubernetes example”
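For reference, that ad-hoc way is literally two commands (nginx is just a stand-in image; with the kubectl of that time, run quietly creates a whole Deployment behind the scenes):

```sh
# Create a deployment (plus its replica set and pod) and expose it as a service
kubectl run web --image=nginx --port=80
kubectl expose deployment web --type=NodePort --port=80

# The objects that appeared in the background without us ever mentioning them
kubectl get deployments,replicasets,pods,services
```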
I’ve been using GitLab CI for a while now, and up to a certain point it worked really well. We had three build servers (GitLab runners) in the beginning, and when the number of teammates and build steps (and therefore commits and build jobs) increased, I’d just add one more server to handle the extra load and feel that the problem was solved.
Not for long. When the number of servers climbed past ten, it became obvious that simply adding servers one by one doesn’t work anymore. It was expensive to keep all of them running all the time, and it still wasn’t enough to handle occasional spikes of commits. Not to mention that during nights and weekends those servers were doing absolutely nothing.
The whole thing needs to be dynamic, and fortunately GitLab CI supports autoscaling out of the box. The documentation is a little bit confusing, but in reality it’s very easy to get started. So here’s the plan: let’s try it!
Continue reading “Autoscaling build servers with Gitlab CI”
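To give an idea of what “out of the box” means here: the autoscaling behaviour is driven by the runner’s config.toml and the docker+machine executor. A sketch of the relevant section, written to a sample file just to look at (the driver and the numbers are placeholders, not recommendations; the real file lives at /etc/gitlab-runner/config.toml):

```sh
cat > config.toml.sample <<'EOF'
concurrent = 20                     # total jobs allowed across all autoscaled machines

[[runners]]
  executor = "docker+machine"
  [runners.machine]
    IdleCount = 1                   # machines kept warm while nothing is building
    IdleTime = 1800                 # seconds an idle machine survives before it is destroyed
    MachineDriver = "amazonec2"     # any docker-machine driver works here
    MachineName = "gitlab-runner-%s"
EOF
```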
Application deployment strategies are evolving really fast. While containerized applications still look hot, there’s something even more interesting happening. What if, instead of dealing with application containers, we got rid of the redundant shell and sent application functions directly to the cloud? Sounds insane, right? Yet all major cloud providers offer function-as-a-service (FaaS) functionality, which along with object storage and database services is enough to build a fully functional web application without a server – a serverless application.
Of course, there’s still a server somewhere. Maybe many of them. But this time we neither know nor care about them. Continue reading “Another shiny toy – serverless application”
I’ve been talking to one of our security guys recently about providing my piece of software with a secret certificate while keeping that certificate out of my hands. Apparently, managing application secrets is not an easy task. Later that day I checked out one of the tools that is supposed to make such tasks simpler – HashiCorp Vault – and was quite impressed. I didn’t realize how big the problem domain is, and how many tools and tricks you have to consider in order to build a solution for it. Today I want to go through the basics of managing secrets with Vault and hopefully highlight a few of the things that impressed me the most.
Continue reading “Keeping application secrets with Vault”
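To make it a bit more concrete before the full post: the very first steps with Vault look something like this (a throwaway dev server; the secret path and file name are invented, and the write/read syntax matches the pre-1.0 CLI of that time):

```sh
# Start an in-memory dev server: already unsealed, root token printed to the console
vault server -dev &
export VAULT_ADDR='http://127.0.0.1:8200'

# Store the certificate in the generic key/value backend, then read it back
vault write secret/myapp/certificate value=@certificate.pem
vault read secret/myapp/certificate
```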
Seeing how easy it was to provision one VM with Ansible, I can’t stop wondering: would it be as easy to deal with a whole cluster? After all, the original example I was trying to move to Ansible had three VMs: one Consul server and two worker machines. The server is ready, so adding two more machines sounds like an interesting exercise. So… let’s begin?
Continue reading “Provisioning cluster of VMs with Ansible”
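Not to give the whole thing away, but the gist is that a cluster is mostly just a bigger inventory. Something along these lines (the IP addresses and playbook name are made up):

```sh
# Hypothetical inventory: one Consul server plus two workers
cat > hosts <<'EOF'
[consul_server]
192.168.99.100

[workers]
192.168.99.101
192.168.99.102
EOF

# Check that all three VMs are reachable, then run the same playbook against all of them
ansible all -i hosts -m ping
ansible-playbook -i hosts site.yml
```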
I’m still looking for ways to automate host configuration. So far I’ve been using Vagrant + bash/PowerShell for configuring Linux or Windows hosts, but somehow I managed to miss the tool designed specifically for tasks like this – Ansible. It’s been around for the last five years or so and has become almost a synonym for “automatic configuration”. Today I’ll finally give it a try and see what difference it makes compared to provisioning with good old Bash.
Continue reading “Provisioning Vagrant VM with Ansible”
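The short version of where this is going: Vagrant has a built-in ansible provisioner, so once the Vagrantfile points it at a playbook, the day-to-day workflow barely changes:

```sh
# First boot runs the playbook automatically through the Vagrantfile's ansible provisioner
vagrant up

# After editing the playbook, re-apply it to the running VM
vagrant provision
```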