During 1-on-1s, we often bounce around the big question: “what gets you uncomfortably excited?” For me, I’m lucky to have a job in SW engineering & a career in scaling platforms. But most importantly, a calling: sparking an industry revolution by significantly increasing SW development productivity. Luckily, I happen to work on car infotainment systems, where the industry productivity gap is at an inflection point. So allow me to scratch the surface of a cathedral 😉.
Generative Super Vision SW Architecture Oracle
What could augment SW developers to see the up-to-date architecture, grasp deeper knowledge, and foresee its evolution? What if any developer, whether a newcomer or an external contributor, could collaborate on a huge codebase productively? Then the quality of changes could become more predictable at scale.
The most common practice is to pre-architect the high-level design. But such an architecture blueprint becomes outdated as soon as anyone starts coding. Typically a capable team will update it in parallel. However, this not only takes more developer hours but also creates two parallel universes: the source code & the architecture documentation. Even worse, it adds a burden to the code review process, a common mitigation practice to reduce the chances of honest technical debt. The difficulty scales up quickly as the complexity of the change & the codebase grows.
A better approach is to enforce a single source of truth by embedding the Architecture Schematic Metadata (ASM) in the source code & build recipes. Developers can then easily evolve the SW while keeping the architecture up-to-date. Furthermore, anyone can see how a change alters the design, which makes spotting hidden technical debt easier. Tools can extract ASM to present higher-level information in a view tailored to any need, on demand and automatically. Such a technique has already significantly improved the quality & productivity of API documentation, e.g. Javadoc. And if it’s built right, it can supercharge the capability & productivity of SW teams on design & implementation quality. So there could be a positive continuous-improvement loop to sustain a huge codebase longer.
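To make the idea concrete, here is a minimal sketch of what embedding ASM in source code might look like. The `@asm_component` decorator, its fields, and the component names are hypothetical illustrations for this post, not an existing library:

```python
# Sketch: Javadoc-style architecture metadata, but machine-readable.
# The decorator, field names, and components below are illustrative assumptions.

ASM_REGISTRY = {}  # component name -> metadata, collected at import time


def asm_component(name, layer, depends_on=()):
    """Attach architecture metadata to a class, next to the code it describes."""
    def wrap(cls):
        meta = {"name": name, "layer": layer, "depends_on": list(depends_on)}
        cls.__asm__ = meta          # metadata travels with the code
        ASM_REGISTRY[name] = meta   # and is available to extraction tools
        return cls
    return wrap


@asm_component("MediaPlayer", layer="application", depends_on=["AudioHAL"])
class MediaPlayer:
    pass


@asm_component("AudioHAL", layer="platform")
class AudioHAL:
    pass


def edges():
    """A tool can extract the dependency edges of the architecture on demand."""
    return sorted((m["name"], d) for m in ASM_REGISTRY.values() for d in m["depends_on"])


print(edges())  # [('MediaPlayer', 'AudioHAL')]
```

Because the metadata lives in the same file as the code, a change to the design and a change to its documentation are one and the same commit.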
Today, most of us use a calculator to handle calculations. A lazybones like me 😅 even uses spreadsheets for simple ones. Because our minds should be spent on more valuable tasks. So why don’t we let tools do the job, such as grouping the SW artifacts and extracting their connections? It can be even more useful when the tool breaks down the silos from the source to the build transformation & even the runtime interactions. Sure, some brilliant developers can “calculate” all of that in their minds. But why would you want them to spend time on that? Won’t it be better to let machines connect the dots? That way, developers can focus on more valuable tasks & scale more easily.
A version control tool is like a Time Machine that SW developers should not live without. It provides a few key inspirations for the Architecture Schematic Metadata, enabling anyone to track down how the architecture evolves and foresee what could be a better path forward. For example:
- ASM should be comparable, just like you can diff two versions of source code.
- ASM should be human-readable too, such as in a structured & descriptive text form, so developers can understand it as easily as code.
- Of course, tools can render ASM into interactive graphs on demand, depending on the job to be done. And machines can apply computation as needed.
Finally, the million-dollar question: what’s in this for me? tl;dr: what if the machine could tell you which regression a change may cause? To be clear, this is not meant to replace the defensive strategies, such as pre-submit and continuous testing. Those capabilities are valuable for dev teams to build first anyway. Then you can take the team to the next level: proactively predicting what might break, early, by augmenting developers & reviewers with higher-level & more comprehensive information. Even better, the more you use it, the better you train it to predict regressions. So you can move faster and scale better by strategically picking the areas of concern for a change to be tested more, without always running a million tests for every single change, and still with the same level of confidence or better. Furthermore, in the process you are building mechanisms to evolve the architecture better by making it easy for everyone to see it in the view they need.
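A first, naive version of “which areas of concern should this change test more?” can be derived from ASM alone: walk the dependency graph backwards from the changed component to find everything that transitively depends on it. The edge data and component names below are hypothetical, and a real predictor would layer learned signals on top of this:

```python
# Sketch: scope testing to the transitive dependents of a changed component,
# using dependency edges that a tool extracted from ASM.
# DEPENDS_ON and the component names are illustrative assumptions.

from collections import defaultdict, deque

# component -> components it depends on (hypothetical ASM extraction result)
DEPENDS_ON = {
    "MediaPlayer": ["AudioHAL"],
    "Navigation": ["MapService"],
    "VoiceAssistant": ["MediaPlayer", "Navigation"],
}


def impacted_by(changed):
    """Return every component that directly or transitively depends on `changed`."""
    # Invert the edges: dependency -> its direct dependents.
    dependents = defaultdict(set)
    for comp, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(comp)
    # Breadth-first walk from the changed component.
    seen, queue = set(), deque([changed])
    while queue:
        for nxt in dependents[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)


# Changing AudioHAL should flag MediaPlayer and VoiceAssistant, not Navigation.
print(impacted_by("AudioHAL"))  # ['MediaPlayer', 'VoiceAssistant']
```

Running only the tests owned by the impacted components is the “strategically pick the areas of concern” move: a far smaller test set per change, with the graph, not a human reviewer, doing the bookkeeping.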
The opinions stated here are my own, not those of my company. They are mostly extrapolations from public information. I don’t have insider knowledge of those companies, nor am I an EV expert.