Narrow Review Scope = Significant Input

Code reviews covering all relevant aspects ranging from coding style to security are important, but they have limits.

For analyzing code in the context of a specific bug, a performance goal, or concurrency correctness, the scope must be narrowed to maximize the potential for significant input. That's what spot code reviews are all about.

Debug Reviews

In the course of a software project, issues arise that are more difficult to address than others: a defect that cannot be reproduced easily, a fatal error with an unknown origin, or a design bug with no obvious solution that results in incorrect behavior. More often than not, finding the root cause requires diving into lower software layers to understand the implications involved - layers that are not normally dealt with on a regular basis. Once identified, a solution can still be a long way off due to the implications changes could have on other areas, or constraints that rule out addressing the root cause altogether.

Objectives
  • Identify the root cause or recommend a path to the root cause based on hands-on debugging, experimentation, and code review
  • Recommend changes to address the root cause in the most efficient way possible

Optimization Reviews

Performance characteristics of code can have a major impact on the viability and competitiveness of software and can be a make-or-break factor for potential customers. As users and data sets grow, implementations need to be optimized to meet the demands. No code base performs optimally with all kinds of input, and no code base scales linearly from the start. Premature optimization is rarely a viable strategy, so performance characteristics have to evolve over time. Optimizing existing code is a balancing act that should aim to generate the highest possible speedup with the least amount of effort and minimal risk.

Objectives
  • Identify bottlenecks inherent in the design, algorithms, resource usage, and implementation
  • Recommend changes to optimize performance and scalability (ordered by speedup probability)

Concurrency Reviews

With the abundance of multiprocessing resources in today's hardware, maximizing parallelism when processing a single request is paramount. Similarly, multiple concurrently executing requests must be served with minimal delay caused by the concurrency control that protects shared resources. These demands can lead to complex code that is hard to maintain and whose correctness is challenging to verify, let alone prove. To make things worse, incorrect algorithms or implementations might run correctly for years on end and suddenly break due to changing usage patterns.

Objectives
  • Verify the correctness of algorithms and their implementation
  • Recommend changes to fix incorrect concurrency control, along with best practices for testing correctness
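
As a hypothetical illustration of the kind of latent defect described above (a sketch, not taken from an actual review), consider an unsynchronized read-modify-write in C++: under light load the result usually looks correct, and updates only start getting lost once contention grows.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    long counter = 0;          // shared state, written by several threads
    std::mutex counter_mutex;  // protects counter in the corrected variant

    // Data race: concurrent unsynchronized increments can be lost.
    void unsafe_work(int iterations) {
        for (int i = 0; i < iterations; ++i)
            ++counter;
    }

    // Corrected variant: increments are serialized, no updates are lost.
    void safe_work(int iterations) {
        for (int i = 0; i < iterations; ++i) {
            std::lock_guard<std::mutex> lock(counter_mutex);
            ++counter;
        }
    }

    int main() {
        std::vector<std::thread> threads;
        for (int t = 0; t < 8; ++t)
            threads.emplace_back(unsafe_work, 100000);  // swap in safe_work to fix
        for (auto& t : threads)
            t.join();
        std::cout << counter << '\n';  // often less than 800000 with unsafe_work
    }

A thread sanitizer run (e.g., building with -fsanitize=thread) flags the race immediately, even on days when the numeric result happens to look correct.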

Why an external review is cost-effective

Your team consists of experts in your domain. Addressing an unexpected issue of unknown origin, optimizing code, or ensuring concurrency correctness is not necessarily something they face on a regular basis. Seeking support for those kinds of tasks can lead to higher efficiency in two ways: we bring in our experience to help complete those tasks, and your team can focus on productive work in your domain. Let them do what they do best. Let us deal with the rest.

Tunnel vision is a concern whenever a team works on a task for an extended period of time. Opinions and decisions made along the way might not be questioned anymore, leading to a lack of progress. A fresh set of eyes with the right skill set could be all that's needed to turn things around and bring you back on track.

Maybe you're merely interested in a second opinion. If, however, the task is already known to be out of your team's comfort zone, not seeking external support could block your productive work for weeks or months. In contrast, involving us only takes a briefing and a moderate, low-risk investment that is always based on timeboxes and never open-ended. Meanwhile, productive work can continue.

How it works

The process is simple. We start with a briefing to establish a common understanding of the objective, to determine the environment and context affecting the scenario, and to estimate the probability of substantial input.

We prefer to work on cases that involve the following technologies because low-level issues are what we're passionate about:

AArch64, assembly, C++, C, Java, Linux, POSIX, UNIX, Win32, Windows, x86_64

In case you're still unsure about the kinds of scenarios we work on, read an actual review report or contact us.

If both of us are ready to commit, we agree on a timebox for the review effort (4 hours initially). If desired, a non-disclosure agreement (NDA) can be used to protect intellectual property and internal data. Access to the scenario with the least possible privileges must be provided.

The actual spot code review is done by processing and reviewing the provided input and code and by building/testing modified versions (if required). A feedback loop is used to communicate progress and preliminary suggestions and to request further information.

The code review leads to a detailed report that outlines the scenario, the analysis, and recommendations to meet the initially presented objectives.

As part of a report review, we jointly decide if we're done or another timebox is required.

Timeboxing = Low Risk

Due to the nature of the review process, we don't work on an open-ended hourly basis: estimating the effort required for debug reviews, as well as for code reviews concerning optimization and concurrency, is difficult because every scenario is different.

Therefore, we limit the hours of work by means of a timebox and add a timebox if more work is required. At any given time, you only have to commit to a manageable financial investment. We start off with a 4-hour fixed-fee timebox and go from there.

Review Reports

Reports cover all aspects of the spot code review: briefing, timeboxes, analysis, action items, and recommendations. The following debug review report illustrates what to expect.

Recent Articles

How to debug a segmentation fault without a core dump

April 9, 2021

In the past, I had to deal with this kind of restriction on several occasions. A segmentation fault or, more generally, abnormal process termination had to be investigated with the caveat that a core dump was not available.

What's the difference between T, volatile T, and std::atomic<T>?

March 22, 2021

There seems to be a lot of confusion about the significance of using std::atomic<T> over a much more straightforward volatile T or plain T. In the context of concurrency, being aware of the difference between the three cases is a fundamental cornerstone of ensuring correctness.
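
A minimal sketch of the contrast, assuming a simple release/acquire hand-off between two threads; it is not taken from the article itself, and the names are illustrative only:

    #include <atomic>
    #include <iostream>
    #include <thread>

    int              plain = 0;     // T: no atomicity, no ordering guarantees
    volatile int     observed = 0;  // volatile T: no compiler caching, but still
                                    // no atomicity or inter-thread ordering
    std::atomic<int> ready{0};      // std::atomic<T>: atomic access plus ordering

    int payload = 0;

    int main() {
        std::thread producer([] {
            payload = 42;                               // prepare the data
            ready.store(1, std::memory_order_release);  // then publish it
        });
        std::thread consumer([] {
            while (ready.load(std::memory_order_acquire) == 0) { }
            std::cout << payload << '\n';               // guaranteed to print 42
        });
        producer.join();
        consumer.join();
    }

Replacing ready with the volatile variant would prevent the compiler from caching the flag, but it would neither make the store/load atomic nor order the write to payload before the flag update - exactly the kind of distinction the article discusses.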

Does the JVM eliminate allocations of temporary objects?

March 15, 2021

A Stack Overflow user was wondering if the JVM can eliminate the allocation of a temporary object by replacing it with an implicit static instance. Is the JVM “smart enough” to do so?

How to access glibc heap metadata

February 18, 2021

I had that exact requirement recently. I needed to restore the glibc heap from a core dump based on a new process. After restoring all original mappings, all it took was patching main_arena.

Does the JVM return memory to the OS?

January 24, 2021

Resource efficiency is a major concern when it comes to optimizing the performance and scalability of an application. In the Java world, one aspect that usually dominates resource concerns is memory usage - more specifically, the size of the Java heap.

How to compare core dumps for simple time travel debugging

January 6, 2021

How can the difference between two Linux core dumps be identified and why would this even come up? This is going to be lengthy, but will hopefully give you your answer to both of those questions.

Get in Touch