Teach Yourself Programming in Ten Years by Peter Norvig (AI, Stanford, Google)
In his classes, Peter stresses the importance of being aware of, and estimating, the order of magnitude of an algorithm's processing time, and of adjusting programs accordingly while avoiding "premature optimization". For that, a higher-abstraction language like Python is preferred.
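As a minimal illustration of that kind of back-of-the-envelope estimation (my own sketch, not from Norvig's article), Python's standard timeit module makes it easy to compare two approaches whose costs differ by orders of magnitude:

```python
# A minimal sketch of measuring relative cost with the standard timeit module.
import timeit

N = 100_000
setup = f"""
data_list = list(range({N}))
data_set = set(data_list)
"""

# Membership test: O(n) scan of a list vs. O(1) hash lookup in a set.
list_time = timeit.timeit("99_999 in data_list", setup=setup, number=1_000)
set_time = timeit.timeit("99_999 in data_set", setup=setup, number=1_000)

print(f"list lookup: {list_time:.4f} s for 1,000 runs")
print(f"set lookup:  {set_time:.4f} s for 1,000 runs")
print(f"ratio: ~{list_time / set_time:.0f}x")  # typically several orders of magnitude
```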
"Approximate timing for various operations on a typical PC:"
| Operation | Time |
| --- | --- |
| execute typical instruction | 1/1,000,000,000 sec = 1 nanosec |
| fetch from L1 cache memory | 0.5 nanosec |
| branch misprediction | 5 nanosec |
| fetch from L2 cache memory | 7 nanosec |
| Mutex lock/unlock | 25 nanosec |
| fetch from main memory | 100 nanosec |
| send 2K bytes over 1Gbps network | 20,000 nanosec |
| read 1MB sequentially from memory | 250,000 nanosec |
| fetch from new disk location (seek) | 8,000,000 nanosec |
| read 1MB sequentially from disk | 20,000,000 nanosec |
| send packet US to Europe and back | 150 milliseconds = 150,000,000 nanosec |
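To make the ratios in the table easier to see, here is a small sketch (my own, not part of Norvig's article) that encodes the numbers above and expresses each latency as the number of typical instructions that could have executed in the same time:

```python
# Encode the latency table above (values in nanoseconds) and express each
# operation relative to one "typical instruction" (1 ns).
LATENCY_NS = {
    "execute typical instruction": 1,
    "fetch from L1 cache memory": 0.5,
    "branch misprediction": 5,
    "fetch from L2 cache memory": 7,
    "mutex lock/unlock": 25,
    "fetch from main memory": 100,
    "send 2K bytes over 1Gbps network": 20_000,
    "read 1MB sequentially from memory": 250_000,
    "fetch from new disk location (seek)": 8_000_000,
    "read 1MB sequentially from disk": 20_000_000,
    "send packet US to Europe and back": 150_000_000,
}

INSTRUCTION_NS = LATENCY_NS["execute typical instruction"]

for operation, ns in LATENCY_NS.items():
    instructions = ns / INSTRUCTION_NS
    print(f"{operation:40s} {ns:>14,.1f} ns  (~{instructions:,.0f} instructions)")
```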
Coding Blocks podcast Episode 45 – Caching Overview and Hardware
"In more relatable terms.
- 1 second for L1 Cache (actual is 0.5 ns)
- 5 days for memory
- 11 days for data center
- 23 days for SSD
- 15 months for HD
- Almost 10 years for internet! (actual is 150 ms)"
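The rescaling behind those figures is simple: stretch time so that a 0.5 ns L1 fetch takes 1 second, then convert everything else by the same factor. A rough sketch of that idea (my own, not from the podcast; it yields approximations in the same ballpark as the podcast's rounded figures):

```python
# Stretch time so a 0.5 ns L1 fetch becomes 1 second, then see what other
# latencies from the table above look like on a human scale.
SCALE = 1.0 / 0.5e-9   # 0.5 ns -> 1 second

def humanize(seconds: float) -> str:
    """Render a duration in the largest convenient unit."""
    for unit, size in (("years", 365 * 86400), ("days", 86400),
                       ("hours", 3600), ("minutes", 60)):
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} seconds"

SAMPLE_NS = {
    "fetch from L1 cache memory": 0.5,
    "read 1MB sequentially from memory": 250_000,
    "read 1MB sequentially from disk": 20_000_000,
    "send packet US to Europe and back": 150_000_000,
}

for operation, ns in SAMPLE_NS.items():
    scaled = ns * 1e-9 * SCALE
    print(f"{operation:35s} -> {humanize(scaled)}")
# e.g. the 1MB memory read lands around a few days and the transatlantic
# round trip around ten years, in line with the podcast's figures.
```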
Visual Animation:
Relative Memory Access speeds (Lesson 3)