Why AGILE doesn't suck.
Previously I wrote why AGILE sucks. Now I write a piece on why AGILE doesn't suck.
Gödel
First off, I start with a claim based on Gödel's incompleteness theorem, which roughly says:
Any theory T has a paradox P that it cannot resolve; resolving P requires a new theory T', which solves P but has its own paradox P'.
That follows from Gödel's incompleteness theorem: either the theory is inconsistent, or there are true statements it cannot prove; such an unprovable statement is what I'm calling the paradox.
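For reference, here is the (informal) statement the above is loosely built on, written out in LaTeX:

```latex
% Gödel's first incompleteness theorem, stated informally:
% for any consistent, effectively axiomatized theory $T$ that
% interprets basic arithmetic, there is a sentence $G_T$ such that
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \lnot G_T .
\]
% i.e. $T$ can neither prove nor refute $G_T$ -- the "paradox" $P$
% in the essay's terminology.
```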
Why AGILE doesn't suck.
Imagine a computer program is a theory T, and that it should match the business case. However, there's a paradox P that the business case needs to resolve, and so the program must change to T' (knowing full well that a new P' will eventually be a problem).
So AGILE capitalises on this by making change cheap: being flexible rather than fixed in a contract lets you move between theories, T -> T' -> T'' -> T''' and so on, with corresponding paradoxes P -> P' -> P'' -> P'''.
However, this is where AGILE still sucks.
Why AGILE still sucks.
If you keep moving between theories, you accumulate paradoxes that each theory leaves unsolved. An unsolved paradox is technical debt. So you have to refactor after each new theory, to eliminate the old paradoxes and keep only one SINGLE outstanding paradox.
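The chain above can be sketched as a toy model (all names here are illustrative, not a real framework): each refinement step produces a new paradox, and the old one either gets resolved by refactoring or lingers as debt.

```python
# Toy model of the T -> T' -> T'' chain. A "theory" is a dict with a
# name, its current outstanding paradox, and a list of unresolved
# paradoxes (technical debt). Purely illustrative.

def refine(theory, refactor=True):
    """Step from T to T': a new paradox P' appears; the old paradox P
    is either resolved (refactor=True) or carried forward as debt."""
    debt = list(theory["debt"])
    if not refactor:
        debt.append(theory["paradox"])  # unresolved paradox = technical debt
    return {
        "name": theory["name"] + "'",
        "paradox": theory["paradox"] + "'",  # the new paradox P'
        "debt": debt,
    }

t = {"name": "T", "paradox": "P", "debt": []}
for _ in range(3):
    t = refine(t)  # refactor at every step: debt stays empty
print(t["name"], t["paradox"], len(t["debt"]))  # T''' P''' 0
```

With `refactor=True` at every step you end with exactly one outstanding paradox and no debt; skip the refactoring and the debt list grows by one paradox per step.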
But we are human.
So I guess that's the deciding factor. We are human, and our business cases are always expanding in scope. Translated, this means our theories will always expand into some new theory.
So this is why AGILE wins in the end. If done properly, with refactoring and eliminating paradoxes, it will have the better outcome.
However, when I run a business, I sure as hell won't develop more code than I need to unless you pay me a subscription fee...
Because creating code isn't free.
But then, that's where generative AI is supposed to come in and tell me that I should code for free, because otherwise I'll be replaced by AI and its "Large Language Models".
But they're flawed. We know, because they're unable to solve even the simplest paradoxes.
Because paradoxes are hard, and they depend on their environment. Take the chicken-or-egg paradox: which came first? We can't tell just by looking at that sentence, but given biology we know that a bird which looked like a chicken laid an egg carrying the genetic mutations that gave rise to the chicken. (Note: this doesn't resolve which came first, the bird or the egg; as you can see, it just shifts the paradox into a larger scope.)
Generative AI is not even close to solving paradoxes like that, because it doesn't experience the human world. The real world. The reality.
Nature defines how paradoxes are solved.
I was working on the Riemann hypothesis, by the way...
And I realised that the reason the hypothesis exists is that it's trying to resolve a paradox.
We're going to need a new theory T' (not just ZFC set theory) to solve it. However, that's the problem. The Riemann hypothesis sits squarely in this game of theories and paradoxes, because it concerns the prime numbers, and there are infinitely many of them; no finite amount of checking can ever settle it case by case, and that, to me, is the paradox itself.
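For context, this is the statement being gestured at. The zeta function ties analysis to the primes through Euler's product, and RH constrains where its zeros can lie:

```latex
% The Riemann zeta function, and its Euler product over the primes:
\[
  \zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
           \;=\; \prod_{p \ \mathrm{prime}} \frac{1}{1 - p^{-s}},
  \qquad \operatorname{Re}(s) > 1,
\]
% extended to the whole complex plane (except $s = 1$) by analytic
% continuation. The Riemann hypothesis asserts that every non-trivial
% zero of $\zeta$ satisfies
\[
  \operatorname{Re}(s) = \tfrac{1}{2}.
\]
```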
However, attacking RH using modular forms is interesting. Imagine being able to wrap all the prime numbers up in a modular form; that would allow you to brute-force all the possibilities and solve RH.
However, there is no known pattern to the prime numbers; their distribution appears increasingly random as they grow. So modular forms are probably not a solution; in fact, the same paradox probably reappears inside modular forms as well.
The paradox P just keeps escaping into every new theory T'.
There is always going to be one final paradox anyway.
So, hence, AGILE still sucks.
(Edit: I mentioned "modular forms" and only now realise that was a mistake. I meant finite fields, over which an analogue of RH has actually been proven; whether that carries over to the general case is another question.)