You can write unit tests for COBOL programs and get fast feedback without depending on the mainframe.
Kicking off XConf Online was Michael and Felix’s talk ‘Redefining the unit’. They discussed their journey developing an automated testing tool for COBOL programs at a large insurance company, sharing insights into a user-centric approach, the importance of adapting your CI processes, and how to get fast feedback without depending on the mainframe. Watch the recording to find out what they learned and concluded while developing this testing tool.
Docker container security is simple, so there’s no reason not to do it.
During the morning session, Monica and Marina underlined the importance of securing your Docker containers and environment at multiple levels, from your build pipeline to your application layers. Docker security needs to be addressed holistically and requires continuous vigilance, helping to reduce vulnerabilities across an ever-growing attack surface. If you are unsure where to begin, check out their talk or start with threat modelling – a useful process for identifying threats and prioritising possible mitigations.
Everyone has a part to play on the path to production.
As a tech lead, Manasi outlines a list of technical and non-technical practices she brings to each project. These range from planning your path to production (normally a phase addressed only at the end of the software lifecycle) through to ‘don’t take your business hat off’. Manasi’s talk offers practical advice you can apply in your own work.
Why should we avoid Null values and stop abusing exceptions?
Mario and Andrei set the scene with their two key takeaways: ‘let’s stop using null values’ and ‘let’s stop abusing exceptions’. Code that throws an exception (or error) every time something unexpected happens is hard to understand and more difficult to maintain. Instead, data types such as Option, Either or Validated make error handling explicit in the type system, where the compiler can verify that every case is handled. Mario and Andrei highlighted how they have been doing this in Kotlin, with the help of the Arrow library. Watch the recording here.
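Mario and Andrei’s examples are in Kotlin with Arrow; as a rough illustration of the idea, here is a minimal hand-rolled Either in Java. The `Either` type and the `parseAge` helper are invented for this sketch and are not Arrow’s API.

```java
import java.util.function.Function;

// Minimal sketch of "no nulls, no exceptions for expected failures":
// the failure case lives in the return type, so callers must handle it.
// (Illustrative invention, not the Arrow library's actual API.)
public class EitherDemo {

    // A value that is either a failure (Left) or a success (Right).
    sealed interface Either<L, R> permits Left, Right {
        <T> T fold(Function<L, T> onLeft, Function<R, T> onRight);
    }

    record Left<L, R>(L error) implements Either<L, R> {
        public <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
            return onLeft.apply(error);
        }
    }

    record Right<L, R>(R value) implements Either<L, R> {
        public <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
            return onRight.apply(value);
        }
    }

    // Returns an error description or a parsed age -- no null, and no
    // exception escapes for an expected bad input.
    static Either<String, Integer> parseAge(String raw) {
        try {
            int age = Integer.parseInt(raw.trim());
            if (age < 0) return new Left<>("age must be non-negative");
            return new Right<>(age);
        } catch (NumberFormatException e) {
            return new Left<>("not a number: " + raw);
        }
    }

    public static void main(String[] args) {
        // fold forces both branches to be handled at the call site.
        System.out.println(parseAge("42").fold(err -> "error: " + err, age -> "ok: " + age));
        System.out.println(parseAge("oops").fold(err -> "error: " + err, age -> "ok: " + age));
    }
}
```

The point is the same one the talk makes: the compiler, not a runtime surprise, tells you when an error path is unhandled.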
Don’t stop an experiment too early to draw conclusions.
You follow agile practices and deploy frequently, but face uncertainty when it’s time to release. Once live, your platform underperforms and you can’t tell why. Irene and Klaus outlined the technical foundations and organisational setup needed to experiment with and learn from your users, allowing you to make decisions based on real behaviour instead of best guesses. They demonstrated how to bring more scientific rigour to your software development cycle to validate your hypotheses. Watch Irene and Klaus’ talk to find out more about their techniques and go live with confidence.
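The talk does not prescribe a particular statistical method; as one hedged illustration of what “scientific rigour” can mean, here is a sketch of a two-proportion z-test on made-up conversion counts, showing why stopping an experiment too early can hide a real effect. All names and numbers below are assumptions.

```java
// Sketch of a two-proportion z-test for an A/B experiment: compares the
// conversion rate of a control and a variant. |z| > 1.96 roughly means
// significance at the 5% level. (Illustrative numbers, not from the talk.)
public class ExperimentCheck {

    // z statistic for the difference between two conversion proportions,
    // using the pooled standard error.
    static double zStatistic(int convA, int totalA, int convB, int totalB) {
        double pA = (double) convA / totalA;
        double pB = (double) convB / totalB;
        double pooled = (double) (convA + convB) / (totalA + totalB);
        double se = Math.sqrt(pooled * (1 - pooled) * (1.0 / totalA + 1.0 / totalB));
        return (pB - pA) / se;
    }

    public static void main(String[] args) {
        // Early peek: 10% vs 16% conversion, but only 100 users per arm.
        // |z| < 1.96, so stopping now would wrongly call it a wash.
        System.out.printf("after 200 users:  z = %.2f%n", zStatistic(10, 100, 16, 100));
        // Same rates with ten times the sample: the difference is now clear.
        System.out.printf("after 2000 users: z = %.2f%n", zStatistic(100, 1000, 160, 1000));
    }
}
```

The same observed rates flip from “inconclusive” to “significant” purely because the sample grew, which is why conclusions drawn from an early stop are unreliable.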
Why your coverage is a lie and how to learn to write better tests.
Towards the end of the day we had Chris Shepherd’s fascinating talk on mutation testing. Chris reviews the conventional testing pyramid and modern approaches to testing software, cautioning that coverage metrics can give a false sense of security. He explores how mutation testing fills this gap by introducing small code changes – so-called ‘mutants’ – and checking whether your tests catch them. Chris also demonstrates how to write better tests before reaching for mutation testing frameworks such as Stryker. Watch Chris’ talk and find out why mutation testing should be part of your testing arsenal.
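A mutation tool like Stryker generates mutants automatically; as a hand-made illustration of why coverage is a lie, the sketch below (an invented example in Java, not Stryker output) shows a “test” with full line coverage that still fails to kill a boundary mutant:

```java
// Why 100% line coverage can lie: a weak assertion executes every line of
// the original function yet also passes against a mutated copy, so the
// mutant "survives". (Hand-written mutant for illustration only.)
public class CoverageLie {

    // Original: charge a 10% fee on amounts strictly over 100.
    static double fee(double amount) {
        if (amount > 100) return amount * 0.10;
        return 0;
    }

    // Mutant: the boundary condition flipped from '>' to '>='.
    static double feeMutant(double amount) {
        if (amount >= 100) return amount * 0.10;
        return 0;
    }

    public static void main(String[] args) {
        // This "test" hits both branches of fee(), so line coverage is 100%.
        boolean weakTestPasses = fee(200) == 20.0 && fee(50) == 0.0;
        // But the same assertions also pass against the mutant, so the
        // mutant survives -- coverage alone proved nothing.
        boolean weakTestKillsMutant = !(feeMutant(200) == 20.0 && feeMutant(50) == 0.0);
        // A boundary-value assertion kills it: fee(100) must be 0.
        boolean strongTestKillsMutant = feeMutant(100) != 0.0;
        System.out.println("weak test passes: " + weakTestPasses);             // true
        System.out.println("weak test kills mutant: " + weakTestKillsMutant);  // false
        System.out.println("strong test kills mutant: " + strongTestKillsMutant); // true
    }
}
```

Surviving mutants point at exactly the assertions your suite is missing, which is the gap coverage numbers cannot see.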
It really is different with data.
Martin Fowler interviews Em Grasmeder, ThoughtWorks’ ‘Data Witch’, about the role of data science and data engineering in software development. Em and Martin discuss the similarities and differences between regular software development and the new world of data, looking at testing frameworks, models and how data can provide value. Martin’s key message is that we need to “break down the silos” between data analytics and software engineering. Watch Em and Martin’s Q&A here.