

Tests should be written from requirements. Using LLMs to write tests after the code is written (probably also by LLMs) is a huge anti-pattern:
The model looks at what the code is doing and writes tests that pass (or fail because they bungle the setup). What the model does not do is understand what the code needs to do and write tests that ensure that functionality is present and correct.
Tests are the thing that should get the most human investment, because they anchor the project to its real-world requirements. You will have far more confidence in your vibe-coded appslop if you at least thought through the test cases and built those out first. Then, whatever the shortcomings of the AI codebase, if the tests pass you know it is doing something right.
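To make the distinction concrete, here is a minimal sketch of what requirement-driven testing looks like. The requirement, the function name, and the dollar amounts are all hypothetical, invented for illustration; the point is that every test case traces back to a clause in the written requirement, not to the behavior of existing code:

```python
# Hypothetical requirement: "Orders of $50.00 or more ship free;
# all other orders are charged a flat $5.99 shipping fee."
# The test cases below are derived from that sentence BEFORE any
# implementation exists -- note the boundary case comes straight
# from the words "or more", not from reading the code.

def shipping_cost(order_total: float) -> float:
    """Minimal implementation written to satisfy the requirement-driven tests."""
    return 0.0 if order_total >= 50.00 else 5.99

# Each assertion maps to a clause in the requirement:
assert shipping_cost(50.00) == 0.0   # "or more" includes exactly $50.00
assert shipping_cost(75.00) == 0.0   # clearly above the threshold
assert shipping_cost(49.99) == 5.99  # just below the threshold
assert shipping_cost(0.00) == 5.99   # "all other orders"
```

A model shown only the finished function would happily generate the first three assertions, but it is the boundary case at exactly $50.00 that catches the classic off-by-one bug (`>` instead of `>=`), and only the requirement text tells you which side of the boundary is correct.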
The US is the largest producer of oil in the world, and there is a very strong lobby for keeping that production up.
The US accounts for about 20% of global oil consumption, and roughly two-thirds of that is transportation. If transportation were heavily electrified, global oil consumption would drop meaningfully and prices would fall. Falling prices hurt US producers more than many overseas producers, because fracking is an expensive extraction method and most US production comes from fracked wells.