Tuesday, December 17, 2024

Summary of wavelets


https://en.wikipedia.org/wiki/Wavelet_transform

https://en.wikipedia.org/wiki/Discrete_wavelet_transform


Read the 2nd URL, in particular the section "One level of the transform" (i.e. a single level of the filter bank):

"Due to the decomposition process the input signal must be a multiple of 2^n where n is the number of levels."  


Then read the Haar wavelet Java source...

```

public static int[] discreteHaarWaveletTransform(int[] input) {
    // This function assumes that input.length=2^n, n>1
    int[] output = new int[input.length];

    for (int length = input.length / 2; ; length = length / 2) {
        // length is the current length of the working area of the output array.
        // length starts at half of the array size and every iteration is halved until it is 1.
        for (int i = 0; i < length; ++i) {
            int sum = input[i * 2] + input[i * 2 + 1];
            int difference = input[i * 2] - input[i * 2 + 1];
            output[i] = sum;
            output[length + i] = difference;
        }
        if (length == 1) {
            return output;
        }

        //Swap arrays to do next iteration
        System.arraycopy(output, 0, input, 0, length);
    }
}

```

You'll realise that the DWT requires N levels (filter banks), where the input (number of samples) is 2^N...

NB: The inner loop that calculates the sum and difference is specific to the Haar wavelet.

Depending on the mother wavelet chosen, the equation/formula/algorithm may be different.
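
For completeness, here's a minimal C++ sketch of the inverse (my own illustration, not from the Wikipedia article): because each Haar step stores only a sum and a difference, the original pair can be recovered exactly, level by level, which is what the synthesis filter bank does.

```
#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal sketch: inverse of the unnormalised sum/difference Haar transform
// above. Assumes data.size() == 2^n and the layout produced by the Java code
// (sums first, then the differences for each level).
std::vector<int> inverseHaar(std::vector<int> data) {
    for (std::size_t length = 1; length < data.size(); length *= 2) {
        std::vector<int> tmp(2 * length);
        for (std::size_t i = 0; i < length; ++i) {
            int sum        = data[i];            // a + b
            int difference = data[length + i];   // a - b
            tmp[i * 2]     = (sum + difference) / 2;  // recover a
            tmp[i * 2 + 1] = (sum - difference) / 2;  // recover b
        }
        std::copy(tmp.begin(), tmp.end(), data.begin());
    }
    return data;
}

int main() {
    // Coefficients produced from {5, 3, 2, 4} by the forward transform above.
    std::vector<int> coeffs = {14, 2, 2, -2};
    for (int v : inverseHaar(coeffs))
        std::printf("%d ", v);   // expect: 5 3 2 4
    std::printf("\n");
}
```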

That's all!

Sunday, December 15, 2024

What it's like being the CEO of a startup company (or two) that you're self-funding


- You work your day job 9-5, 5 days a week, sometimes more, so you get money to fund your startup, and pay living costs.


- The only time you have to work on your startup is the weekends, and you sometimes work without sleep for 30 hours straight, trying to make a product, or to come up with an idea for one. You do this because you're trying to maximise productivity on the weekends.


- Then you crash (go to bed) until Monday morning.


- So basically, you're working a 60-70 hour week, with little time for other things. Cooking your own meal is a luxury (and also saves living costs!).


- You want to use your spare cash to hire someone to help you, but the costs are too high in the country you live in, so you're thinking of outsourcing to other countries, but then you don't know the laws regarding employment in those countries... (e.g. tax laws, social security laws, insurance, etc.)


- So you go at it alone.


- And it's ridiculously hard.


- Sometimes you need to go out on dates with your partner, or take care of family.


- You actually work more than 60-70 hours a week, because your day job, being a multinational spanning different time zones, asks you to conduct meetings after hours.


- You can barely afford living costs after expenses, so your startup has to run on vapour.


- Which is fine, if it's a software startup -- working on software literally requires zero dollars up front, just time.


- You probably should book the time spent on your software startup as cost.


- You're worried that you might never get revenue for your startup(s).


- You're after funding, but not really, because you're afraid of VCs hanging you out to dry, or kicking you out of your own company. Which is why you offer them non-voting shares, which they'll never accept. LOL.


- To be honest, 60-70 hours a week isn't all that hard. In my youth, I did 90 hour weeks at startups, and then took week-long breaks after. But at age 40+, it takes a toll on you.


- Which is why some say startups don't hire old guys: they tire too easily.


- More specifically though, my startup (Drudget) already has products; I just need to work on the sales/marketing side. My other startup (Zzimps) needs a killer video game to demo the DLC plugin code, which I would either have to make myself or get some indie/3rd-party developer to make. I just keep imagining one day we'll see Super Mario in Counter-Strike.




Friday, December 13, 2024

Why I still use the abbreviation C/C++


- I was born in the 80s, coded C and C++ in the 90s when I was a teenager.

- I'm old school.

- Dr. Dobb's Journal and C/C++ Users Journal used to be publications I'd read.

- C++ is a superset of C, and relies on C idioms such as pointers, arrays, functions, etc. Whilst it's arguable that it's not a strict superset (some valid C will produce errors in C++ compilers), it's often thought of as a superset because it began as "C with Classes".

- C++ still has the inline assembler capability of C. This is one of the most powerful assets that C and C++ share: the ability to hand-optimise with inline assembly. Most other languages you find nowadays don't give you that capability.
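
As a rough illustration, here's a minimal sketch using the GCC/Clang extended-asm syntax on x86-64 (compiler- and architecture-specific, so treat it as an example rather than portable code); the same asm syntax is usable from both the C and C++ front ends:

```
#include <cstdint>
#include <cstdio>

// Read the CPU's time-stamp counter via inline assembly (x86-64, GCC/Clang).
static inline std::uint64_t rdtsc() {
    std::uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}

int main() {
    std::uint64_t t0 = rdtsc();
    volatile int sink = 0;
    for (int i = 0; i < 1000; ++i)   // some work to time
        sink += i;
    std::uint64_t t1 = rdtsc();
    std::printf("cycles: %llu\n", static_cast<unsigned long long>(t1 - t0));
}
```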

- I'm sure I'll anger some people with religious fervour, but they're probably not old-school enough to appreciate C++'s heritage, and are jumping on the bandwagon of whatever seems cool to call C++ these days.

Tuesday, December 10, 2024

Why C++'s string_view is slower.


C++17's string_view is meant to replace parameters where you used to have "const std::string &str".


However, it's actually slower than "const std::string &str", because it requires the creation of a string_view object on the stack (visible in the assembly below).
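
In other words, passing a std::string to a std::string_view parameter implicitly builds a temporary view from the string's pointer and length. A minimal sketch of what the call site effectively does (func here is just a stand-in, not the benchmark function below):

```
#include <cstdio>
#include <string>
#include <string_view>

static int func(std::string_view str) { return static_cast<int>(str.size()); }

int main() {
    std::string blah = "hello world";

    // Passing the std::string directly invokes its conversion operator,
    // which constructs a temporary string_view from {data(), size()}...
    int a = func(blah);

    // ...roughly equivalent to spelling it out by hand:
    int b = func(std::string_view(blah.data(), blah.size()));

    std::printf("%d %d\n", a, b);
}
```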


Even when it's optimised with -O3 (gcc flag), it's still slower.



test1.cpp:


```
#include <cstdio>
#include <string>

int func(const std::string &str)
{
    for (int i = 0; i < str.size(); i++) {
        if (str[i] == ' ')
            return i;
    }

    return 0;
}

int main()
{
    std::string blah = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    int count = 0;

    for (int i = 0; i < 10000000; i++) {
        count += func(blah);
    }

    printf("%d\n", count);
}
```



test2.cpp:


```
#include <cstdio>
#include <string>
#include <string_view>

int func(std::string_view str)
{
    for (int i = 0; i < str.size(); i++) {
        if (str[i] == ' ')
            return i;
    }

    return 0;
}

int main()
{
    std::string blah = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    int count = 0;

    for (int i = 0; i < 10000000; i++) {
        count += func(blah);
    }

    printf("%d\n", count);
}
```



func() is a cheap dummy operation whose return value feeds into the printf("%d\n", count); call later on, so the compiler can't optimise the loop away.



test1.cpp's time taken (the first run is non-optimised):


```
$ g++ -o test test.cpp
$ time ./test
0

real 0m1.080s
user 0m1.075s
sys 0m0.005s

$ g++ -o test test.cpp -O3
$ time ./test
0

real 0m0.221s
user 0m0.220s
sys 0m0.001s
```



test2.cpp's time taken (the first run is non-optimised):


```
$ g++ -o test2 test2.cpp
$ time ./test2
0

real 0m1.140s
user 0m1.135s
sys 0m0.005s

$ g++ -o test2 test2.cpp -O3
$ time ./test2
0

real 0m0.227s
user 0m0.223s
sys 0m0.004s
```



When comparing the two assembly outputs (non-optimised):


test1.cpp:


```


0x00000000000014d2 <+94>: lea -0x40(%rbp),%rax

0x00000000000014d6 <+98>: mov %rax,%rdi

0x00000000000014d9 <+101>: call 0x1409 <_Z4funcRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE>

0x00000000000014de <+106>: add %eax,-0x48(%rbp)

0x00000000000014e1 <+109>: addl $0x1,-0x44(%rbp)

0x00000000000014e5 <+113>: cmpl $0x98967f,-0x44(%rbp)

0x00000000000014ec <+120>: jle 0x14d2 <main()+94>

0x00000000000014ee <+122>: mov -0x48(%rbp),%eax



```


test2.cpp:


```


0x00000000000014af <+78>: movl $0x0,-0x48(%rbp)

0x00000000000014b6 <+85>: movl $0x0,-0x44(%rbp)

0x00000000000014bd <+92>: jmp 0x14e6 <main()+133>

0x00000000000014bf <+94>: lea -0x40(%rbp),%rax

0x00000000000014c3 <+98>: mov %rax,%rdi

0x00000000000014c6 <+101>: call 0x1250 <_ZNKSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEcvSt17basic_string_viewIcS2_EEv@plt>

0x00000000000014cb <+106>: mov %rax,%rcx

0x00000000000014ce <+109>: mov %rdx,%rbx

0x00000000000014d1 <+112>: mov %rdx,%rax

0x00000000000014d4 <+115>: mov %rcx,%rdi

0x00000000000014d7 <+118>: mov %rax,%rsi

0x00000000000014da <+121>: call 0x13e9 <_Z4funcSt17basic_string_viewIcSt11char_traitsIcEE>

0x00000000000014df <+126>: add %eax,-0x48(%rbp)

0x00000000000014e2 <+129>: addl $0x1,-0x44(%rbp)

0x00000000000014e6 <+133>: cmpl $0x98967f,-0x44(%rbp)



```


In test2.cpp it first calls std::string's conversion operator to string_view (constructing the temporary view), and then calls func(std::string_view); whereas in test1.cpp it just calls func(const std::string &).


In the optimised versions, though, things look a bit different. The compiler has inlined the character-scanning loop of func() directly into main() (using SSE instructions to build the string), so func() itself is never called.


test2 optimised (-O3) assembly:


```


Dump of assembler code for function main():

Address range 0x1100 to 0x1237:

0x0000000000001100 <+0>: endbr64

0x0000000000001104 <+4>: push %rbp

0x0000000000001105 <+5>: xor %edx,%edx

0x0000000000001107 <+7>: push %rbx

0x0000000000001108 <+8>: sub $0x48,%rsp

0x000000000000110c <+12>: mov %fs:0x28,%rax

0x0000000000001115 <+21>: mov %rax,0x38(%rsp)

0x000000000000111a <+26>: xor %eax,%eax

0x000000000000111c <+28>: lea 0x10(%rsp),%rdi

0x0000000000001121 <+33>: lea 0x8(%rsp),%rsi

0x0000000000001126 <+38>: movq $0x3e,0x8(%rsp)

0x000000000000112f <+47>: lea 0x20(%rsp),%rbx

0x0000000000001134 <+52>: mov %rbx,0x10(%rsp)

0x0000000000001139 <+57>: call 0x10d0 <_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE9_M_createERmm@plt>

0x000000000000113e <+62>: mov 0x8(%rsp),%rdx

0x0000000000001143 <+67>: movdqa 0xec5(%rip),%xmm0 # 0x2010

0x000000000000114b <+75>: xor %r8d,%r8d

0x000000000000114e <+78>: movabs $0x333231307a797877,%rdi

0x0000000000001158 <+88>: mov %rax,0x10(%rsp)

0x000000000000115d <+93>: mov %rdx,0x20(%rsp)

0x0000000000001162 <+98>: mov $0x3938,%edx

0x0000000000001167 <+103>: movups %xmm0,(%rax)

0x000000000000116a <+106>: movdqa 0xeae(%rip),%xmm0 # 0x2020

0x0000000000001172 <+114>: mov %rdi,0x30(%rax)

0x0000000000001176 <+118>: mov $0x989680,%edi

0x000000000000117b <+123>: movups %xmm0,0x10(%rax)

0x000000000000117f <+127>: movdqa 0xea9(%rip),%xmm0 # 0x2030

0x0000000000001187 <+135>: mov %dx,0x3c(%rax)

0x000000000000118b <+139>: mov 0x10(%rsp),%rdx

0x0000000000001190 <+144>: movl $0x37363534,0x38(%rax)

0x0000000000001197 <+151>: movups %xmm0,0x20(%rax)

0x000000000000119b <+155>: mov 0x8(%rsp),%rax

0x00000000000011a0 <+160>: mov %rax,0x18(%rsp)

0x00000000000011a5 <+165>: movb $0x0,(%rdx,%rax,1)

0x00000000000011a9 <+169>: mov 0x18(%rsp),%rdx

0x00000000000011ae <+174>: mov 0x10(%rsp),%rsi

0x00000000000011b3 <+179>: nopl 0x0(%rax,%rax,1)

0x00000000000011b8 <+184>: test %rdx,%rdx

0x00000000000011bb <+187>: je 0x11da <main()+218>

0x00000000000011bd <+189>: xor %eax,%eax

0x00000000000011bf <+191>: jmp 0x11d1 <main()+209>

0x00000000000011c1 <+193>: nopl 0x0(%rax)

0x00000000000011c8 <+200>: add $0x1,%rax

0x00000000000011cc <+204>: cmp %rdx,%rax

0x00000000000011cf <+207>: je 0x11da <main()+218>

0x00000000000011d1 <+209>: cmpb $0x20,(%rsi,%rax,1)

0x00000000000011d5 <+213>: jne 0x11c8 <main()+200>

0x00000000000011d7 <+215>: add %eax,%r8d

0x00000000000011da <+218>: sub $0x1,%edi

0x00000000000011dd <+221>: jne 0x11b8 <main()+184>

0x00000000000011df <+223>: mov %r8d,%edx

0x00000000000011e2 <+226>: lea 0xe1b(%rip),%rsi # 0x2004

0x00000000000011e9 <+233>: mov $0x1,%edi

0x00000000000011ee <+238>: xor %eax,%eax

0x00000000000011f0 <+240>: call 0x1090 <__printf_chk@plt>

0x00000000000011f5 <+245>: mov 0x10(%rsp),%rdi

0x00000000000011fa <+250>: cmp %rbx,%rdi

0x00000000000011fd <+253>: je 0x120d <main()+269>

0x00000000000011ff <+255>: mov 0x20(%rsp),%rax

0x0000000000001204 <+260>: lea 0x1(%rax),%rsi

0x0000000000001208 <+264>: call 0x10a0 <_ZdlPvm@plt>

0x000000000000120d <+269>: mov 0x38(%rsp),%rax

0x0000000000001212 <+274>: sub %fs:0x28,%rax

0x000000000000121b <+283>: jne 0x1226 <main()+294>

0x000000000000121d <+285>: add $0x48,%rsp

0x0000000000001221 <+289>: xor %eax,%eax

0x0000000000001223 <+291>: pop %rbx

0x0000000000001224 <+292>: pop %rbp

0x0000000000001225 <+293>: ret

0x0000000000001226 <+294>: call 0x10b0 <__stack_chk_fail@plt>

0x000000000000122b <+299>: endbr64

0x000000000000122f <+303>: mov %rax,%rbp

0x0000000000001232 <+306>: jmp 0x10e0 <main.cold>

Address range 0x10e0 to 0x1100:

0x00000000000010e0 <-32>: mov 0x10(%rsp),%rdi

0x00000000000010e5 <-27>: cmp %rbx,%rdi

0x00000000000010e8 <-24>: je 0x10f8 <main()-8>

0x00000000000010ea <-22>: mov 0x20(%rsp),%rax

0x00000000000010ef <-17>: lea 0x1(%rax),%rsi

0x00000000000010f3 <-13>: call 0x10a0 <_ZdlPvm@plt>

0x00000000000010f8 <-8>: mov %rbp,%rdi

0x00000000000010fb <-5>: call 0x10c0 <_Unwind_Resume@plt>

End of assembler dump.

(gdb) disas func

Dump of assembler code for function _Z4funcSt17basic_string_viewIcSt11char_traitsIcEE:

0x0000000000001330 <+0>: endbr64

0x0000000000001334 <+4>: test %rdi,%rdi

0x0000000000001337 <+7>: je 0x1360 <_Z4funcSt17basic_string_viewIcSt11char_traitsIcEE+48>

0x0000000000001339 <+9>: xor %eax,%eax

0x000000000000133b <+11>: jmp 0x1349 <_Z4funcSt17basic_string_viewIcSt11char_traitsIcEE+25>

0x000000000000133d <+13>: nopl (%rax)

0x0000000000001340 <+16>: add $0x1,%rax

0x0000000000001344 <+20>: cmp %rax,%rdi

0x0000000000001347 <+23>: je 0x1360 <_Z4funcSt17basic_string_viewIcSt11char_traitsIcEE+48>

0x0000000000001349 <+25>: cmpb $0x20,(%rsi,%rax,1)

0x000000000000134d <+29>: mov %eax,%r8d

0x0000000000001350 <+32>: jne 0x1340 <_Z4funcSt17basic_string_viewIcSt11char_traitsIcEE+16>

0x0000000000001352 <+34>: mov %r8d,%eax

0x0000000000001355 <+37>: ret

0x0000000000001356 <+38>: cs nopw 0x0(%rax,%rax,1)

0x0000000000001360 <+48>: xor %r8d,%r8d

0x0000000000001363 <+51>: mov %r8d,%eax

0x0000000000001366 <+54>: ret



```



So we can't directly compare the optimised versions. But even so, the optimised test2 isn't actually faster. That's the weird thing: no string_view is being constructed (its conversion is optimised away), yet the inlined comparison still seems marginally slower?



Conclusion


Don't use string_view if you care about speed, as it's not actually faster.


There's just some stuff that never gets benchmarked, but that cargo-cult coders will use relentlessly.


I have no idea where the C++ standard is headed, but in my opinion it's not necessarily going in the right direction.



Monday, December 9, 2024

Things to learn (these should be on your TODO list)


For engineers (DFT, FFT, Kalman filter, Neural Networks, PID controllers)


These are topics you can learn from ChatGPT.


I used to have example programs on my github with those topics.


It's important to learn them all, as they're fundamental in the field of engineering.


DFT/FFT can be used for sound or light (electromagnetic waves).


Kalman filter can be used for sensor fusion projects.


Neural Networks are important, as they're the basis of AI.


You'll find that DFT/FFT and NNs are built on dot products, whilst NNs additionally use gradient descent for training.
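
To make the dot-product view concrete, here's a minimal C++ sketch of the naive DFT (my own illustration, not one of the old GitHub examples): each output bin is just the dot product of the input signal with a complex sinusoid at that bin's frequency.

```
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

// Minimal sketch: the naive O(N^2) DFT. Each output bin X[k] is the dot
// product of the input with the complex sinusoid e^(-2*pi*i*k*n/N).
std::vector<std::complex<double>> dft(const std::vector<double> &x) {
    const std::size_t N = x.size();
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> X(N);
    for (std::size_t k = 0; k < N; ++k) {
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t n = 0; n < N; ++n) {
            const double angle = -2.0 * pi * double(k) * double(n) / double(N);
            acc += x[n] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        X[k] = acc;
    }
    return X;
}

int main() {
    // An 8-sample sine wave at bin 1: the energy shows up in X[1] (and X[7]).
    const double pi = std::acos(-1.0);
    std::vector<double> x(8);
    for (std::size_t n = 0; n < x.size(); ++n)
        x[n] = std::sin(2.0 * pi * double(n) / 8.0);

    auto X = dft(x);
    for (std::size_t k = 0; k < X.size(); ++k)
        std::printf("X[%zu] = %6.3f %+6.3fi\n", k, X[k].real(), X[k].imag());
}
```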


The Kalman filter is essentially a predictor versus a measurer: the prediction is weighed against the measurement according to their estimated uncertainties, and whichever is deemed more accurate gets weighted more heavily.
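
Here's a minimal 1-D Kalman filter sketch in C++ (the noise values and measurements are made up for illustration), showing how the Kalman gain does that weighting:

```
#include <cstdio>

// Minimal sketch of a 1-D Kalman filter tracking a roughly constant value
// from noisy measurements. The gain decides prediction vs measurement trust.
struct Kalman1D {
    double x;  // state estimate
    double p;  // estimate variance
    double q;  // process noise variance
    double r;  // measurement noise variance

    double update(double z) {
        p += q;                    // predict: uncertainty grows a little
        double k = p / (p + r);    // Kalman gain: 0 = trust prediction, 1 = trust measurement
        x += k * (z - x);          // correct the estimate toward the measurement
        p *= (1.0 - k);            // uncertainty shrinks after the correction
        return x;
    }
};

int main() {
    Kalman1D kf{0.0, 1.0, 0.0001, 0.1};
    const double noisy[] = {1.1, 0.9, 1.05, 0.95, 1.02, 0.98};
    for (double z : noisy)
        std::printf("measured %.2f -> estimate %.4f\n", z, kf.update(z));
}
```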


PID controllers are a form of feedback control loop: they continually correct the error between a setpoint and the measured value, and they keep adjusting the control output even when the target is moving.
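
And a minimal PID sketch in C++ (the gains and the toy plant model are invented for illustration, not tuned for anything real):

```
#include <cstdio>

// Minimal sketch of a PID control loop driving a toy first-order plant
// toward a setpoint.
struct PID {
    double kp, ki, kd;
    double integral;
    double prevError;

    double step(double setpoint, double measured, double dt) {
        const double error = setpoint - measured;
        integral += error * dt;                              // I term accumulates past error
        const double derivative = (error - prevError) / dt;  // D term reacts to change
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

int main() {
    PID pid{2.0, 0.5, 0.1, 0.0, 0.0};
    double plant = 0.0;            // the measured process variable
    const double dt = 0.1;
    for (int i = 0; i < 50; ++i) {
        const double u = pid.step(1.0, plant, dt);  // drive the plant toward setpoint 1.0
        plant += u * dt;                            // toy plant: just integrates the control
        if (i % 10 == 0)
            std::printf("t=%4.1f  plant=%.3f\n", i * dt, plant);
    }
}
```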


Computer scientists (prime numbers, entropy, cryptography, buffer overflows, hacking, Huffman encoding/compression, OS kernels)


Software engineers (device drivers, embedded systems, GUI systems/X11/Qt/Windows)


Web developers (frontend, backend)

Wednesday, October 30, 2024

Using gdb


I figured I'd write a mini-tutorial on how to use gdb, because there aren't many places that teach you how to use gdb effectively.


Let's say you have a program and it crashes. What do you do?


Example code:


```


void func() { char *p = 0; *p = 0x69; }


int main() { func(); }



```



Next, compile it with debugging symbols (gcc -g test.c, which produces a.out) and load it into gdb:


```


gdb a.out


```



Followed by the `run` command:


```


(gdb) run

Starting program: /home/d/a.out

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".


Program received signal SIGSEGV, Segmentation fault.

0x000055555555513d in func () at test.c:1

1 void func() { char *p = 0; *p = 0x69; }

(gdb)



```



You can see where it crashes, but you'd like a stacktrace...


```


(gdb) bt

#0 0x000055555555513d in func () at test.c:1

#1 0x0000555555555155 in main () at test.c:3

(gdb)



```


(NB: `bt` stands for backtrace)


You can go up a frame and check local variables:


```


(gdb) up

#1 0x0000555555555155 in main () at test.c:3

3 int main() { func(); }

(gdb) info local

No locals.



```



Or down a frame and check local variables:


```


(gdb) down

#0 0x000055555555513d in func () at test.c:1

1 void func() { char *p = 0; *p = 0x69; }

(gdb) info local

p = 0x0



```




You can continue after the segfault:


```

(gdb) cont

Continuing.


Program terminated with signal SIGSEGV, Segmentation fault.

The program no longer exists.

(gdb)



```



Now we can re-run it, but before we do, we can set a breakpoint:


```

(gdb) break func

Breakpoint 1 at 0x555555555131: file test.c, line 1.

(gdb)



```



Now we run it again, it will stop at the breakpoint:


```


(gdb) run

Starting program: /home/d/a.out

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".


Breakpoint 1, func () at test.c:1

1 void func() { char *p = 0; *p = 0x69; }

(gdb)


```


We can single step through it:


```


(gdb) step


Program received signal SIGSEGV, Segmentation fault.

0x000055555555513d in func () at test.c:1

1 void func() { char *p = 0; *p = 0x69; }

(gdb)


```



We can also try the command `next`, which is similar to `step` but steps over function calls instead of stepping into them.


We can get help from gdb at any time using `help`.


Another useful `info` command is `info reg`, which shows CPU registers.


Also useful is the `disas` command, which disassembles the code.

 

Tuesday, October 29, 2024

More Captain Drone Killer ideas...

 

Disclaimer: These are just ideas, use them at your own risk.


Some interesting DIY wide-band jammers


A wide-band jammer that only costs $120 to make:


https://eee.yasar.edu.tr/wp-content/uploads/2015/06/cem_anil_kivanc_jammer_poster_yasar.pdf


Another wide-band jammer; this one uses a spark gap to generate noise. It's very interesting, but I would be scared of all the X-rays being generated from the spark:


https://www.instructables.com/Simple-broadband-jammer/


And of course, I've seen a lot of wide-band jammers for sale. I can't recommend any, since I haven't personally tried them out.


These wide-band jammers would obviously be useful for knocking out drones. Unless you want to use a Microwave Oven Cannon(TM).


Microwave Oven Cannon(TM)


This one is quite simple. If you dismantle your microwave oven and take out the magnetron, waveguide and the electronics, you could turn it into a Microwave Oven Cannon(TM).


I wouldn't recommend it unless you re-design the waveguide to be more of a parabolic dish, so that it projects the waves in a parallel beam.


Speaking of which... drone jamming isn't all that difficult?

It seems drone jamming and knocking out drones aren't all that difficult.


It might be that drone detection is more difficult, because the cellular mobile frequency range is being used by mobile phones as well as drones (drones with SIM cards).


Drone detection, various ways...


  • Radar
    • Hard, because drones are so small you could mistake them for birds
  • Microphone
    • Easier than radar, because they make a buzzing noise
  • Camera
    • Easiest, if you have a fish-eye wide-angle lens with computer vision algorithms
  • Radio Frequency (RF) usage, e.g. cellular frequencies, normal RC (radio controller) frequencies, WiFi frequencies
    • Hard, because it's difficult to distinguish normal cellular and WiFi usage from drone usage



