I was thinking again about that bot that will supposedly monitor unreliable tests for me, and suddenly realized one thing. All the examples I've dealt with were dialog based: the user sends the first message, the bot responds, and so on. But the bot I have in mind is different. The initial conversation does start like a dialog, but once the bot begins monitoring unit test statistics and finds something I should take a look at, it needs to talk first! Microsoft calls this scenario sending proactive messages, and there are a few tricks to making that possible. Continue reading “Sending proactive messages with Microsoft Bot Framework”
Part of my job description is our CI/CD, which kind of implies that I'm interested in keeping the build green. That doesn't mean I immediately jump in whenever a unit test fails, but I definitely keep an eye on the unreliable ones.
Whenever the master branch stays red long enough, this is what happens to each failing test in it:
- Look up the test's failure history in Google BigQuery (select Name, Result, count(*)...).
- If the test behaves like a random results generator, create a case for it.
- Skip the test in the master branch and reference the case number as the reason.
- Find out who created the test (git blame) and assign the case back to the author.
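The "random results generator" check in the steps above can be sketched as a simple heuristic over a test's recent result history. This is my own illustration, not the actual pipeline: the function name, thresholds, and result strings are all assumptions, and the history would come from the BigQuery query rather than a hard-coded list.

```python
from collections import Counter

def looks_flaky(results, min_runs=10, min_flip_ratio=0.2):
    """Heuristic: a test looks flaky if it both passes and fails
    often enough within its recent history.

    `results` is a list like ["Passed", "Failed", ...], e.g. the
    Result column pulled from the failure-history query.
    (Names and thresholds here are illustrative assumptions.)
    """
    if len(results) < min_runs:
        return False  # not enough data to judge
    counts = Counter(results)
    failures = counts.get("Failed", 0)
    passes = counts.get("Passed", 0)
    if failures == 0 or passes == 0:
        return False  # consistently green or consistently red
    # Share of the rarer outcome: a high value means the test flips a lot.
    return min(failures, passes) / len(results) >= min_flip_ratio

# A test that alternates between pass and fail is a clear candidate:
history = ["Passed", "Failed"] * 5
print(looks_flaky(history))  # True
```

A real version would also weight recent runs more heavily, since a test that was fixed last week shouldn't be skipped for sins committed last month.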
Pretty simple. And boring. I could automate that, but it's not always clear who the author of a test is. After all, people resign, update each other's tests, refactor, and destroy git history on special occasions. I was thinking about solving that with machine learning, but it feels like overkill. Creating a bot, on the other hand, that would ask me to double-check when it's uncertain sounds more interesting and actually doable. Even if I never finish it.
However, I've never written any bots before, so for starters I'd like to see what it actually feels like. Continue reading “Playing with Microsoft Bot Framework”