diskcache: Your secret Python perf weapon
Episode Deep Dive
Guest Introduction
Vincent Warmerdam joins Michael Kennedy to dive deep into DiskCache. Vincent has an extensive background in data science and machine learning, which is what many in the Python community know him from. He currently works at Marimo (marimo.io), a company building modern Python notebooks that take lessons from Jupyter and apply a fresh, reactive approach. Vincent is also a prolific content creator, maintaining educational resources at Calmcode (calmcode.io) and contributing to open source projects like scikit-lego. His practical experience spans both data science workflows in notebooks and web development, giving him unique insight into how caching benefits different parts of the Python ecosystem.
What to Know If You're New to Python
If you are newer to Python and want to get the most out of this episode analysis, here are some foundational concepts that will help:
- Dictionaries in Python: DiskCache behaves like a Python dictionary with square bracket access (cache["key"] = value), so understanding how dictionaries work is essential.
- Decorators: The episode discusses using @cache.memoize decorators to automatically cache function results, similar to the built-in functools.lru_cache.
- Serialization with Pickle: Python's pickle module converts objects to bytes for storage; DiskCache uses this under the hood for complex objects.
- Multi-processing basics: Understanding that web apps often run multiple Python processes helps explain why cross-process caching matters.
Key Points and Takeaways
1. DiskCache: A SQLite-Backed Dictionary That Persists to Disk
DiskCache is a Python library that provides a dictionary-like interface backed by SQLite, allowing you to cache data that survives process restarts. Unlike functools.lru_cache which stores everything in memory and disappears when your Python process ends, DiskCache writes to a file on disk. This means your cached data persists across restarts, deployments, and even Docker container rebuilds. The library handles all the complexity of SQLite transactions, thread safety, and process safety behind a simple API where you just use square bracket notation like a regular dictionary.
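In code, the dictionary feel looks roughly like this minimal sketch (the cache directory name is just an example):

```python
from diskcache import Cache

# Opening a cache creates (or reuses) a SQLite-backed directory on disk.
with Cache("./.demo_cache") as cache:             # hypothetical path
    cache["greeting"] = "hello"                   # stored via pickle into SQLite
    print(cache["greeting"])                      # read back like a dict
    print("greeting" in cache)                    # membership checks work too
    cache.set("page", "<h1>hi</h1>", expire=300)  # optional per-key TTL in seconds

# A new process (or a restart) that opens "./.demo_cache" sees the same data.
```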
2. Thread Safety and Cross-Process Sharing
One of DiskCache's standout features is that it is both thread-safe and process-safe out of the box. This is critical for web applications that typically run multiple worker processes (a "web garden") where each process needs access to the same cached data. Traditional in-memory caches like LRU cache are isolated to a single process, meaning each worker would have to build its own cache independently. With DiskCache, all processes can read from and write to the same SQLite file, and the library handles the locking and concurrency concerns automatically. Michael uses this on Talk Python's website where multiple Docker containers share a common cache volume.
- SQLite's built-in locking mechanisms
- Works across Docker containers with shared volumes
3. Massive Cost Savings: Disk is Cheap, Memory is Expensive
The episode makes a compelling economic argument for disk-based caching. Modern NVMe SSDs are incredibly fast, often approaching memory speeds for read operations, but cost a fraction of what RAM costs on cloud providers. Michael mentioned paying around $5 for 400GB of disk space on his cloud VMs, while the equivalent RAM would cost orders of magnitude more. This flips the traditional "keep it in memory because it is faster" advice on its head, especially for caching scenarios where the alternative is recomputing expensive operations or making network calls to Redis.
- NVMe SSD performance approaches memory for many use cases
- Reduces cloud hosting costs significantly
- No need for separate Redis/Memcached servers
4. LLM and Machine Learning Use Cases
Vincent highlighted DiskCache as essential for anyone working with LLMs or machine learning models. When running benchmarks or experiments, you often need to call expensive LLM APIs or run inference on local models repeatedly. If the same input produces a deterministic (or acceptable) output, caching prevents wasting compute, time, and money on redundant calls. This is especially valuable during development when you might restart notebooks or rerun experiments many times. The @cache.memoize decorator makes this trivially easy to implement on any function, as sketched after the list below.
- Prevents redundant LLM API calls during benchmarks
- Saves money on cloud API costs
- Essential for iterative notebook workflows
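A minimal sketch of the idea, with a slow stand-in function in place of a real LLM client (the function name and cache directory are made up):

```python
import time
from diskcache import Cache

cache = Cache("./.llm_cache")      # hypothetical cache directory

@cache.memoize()                   # results keyed by the function name and its arguments
def ask_llm(prompt: str, model: str = "some-model") -> str:
    # Stand-in for a slow, billable LLM API call.
    time.sleep(2)
    return f"[{model}] answer to: {prompt}"

print(ask_llm("What is DiskCache?"))   # slow the first time
print(ask_llm("What is DiskCache?"))   # instant: served from disk, even after a restart
```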
5. Web Application Caching Patterns
Michael shared several practical examples from the Talk Python website. He caches rendered Markdown-to-HTML conversions, parsed YouTube video IDs from show notes, and HTTP request results for cache-busting file hashes. Each of these represents a computation that does not need to happen on every request. He maintains separate cache instances for different purposes, making it easy to clear specific caches without affecting others. Using content hashes as part of cache keys ensures that stale entries are never served once the source content changes; a sketch of that pattern follows the list below.
- Markdown to HTML rendering
- YouTube ID extraction from show notes
- HTTP cache-busting hash computation
- Separate caches for different concerns
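This is not Michael's actual code, but the content-hash pattern might look roughly like this sketch (the markdown package stands in for whatever renderer is used; names and paths are illustrative):

```python
import hashlib
import markdown                        # pip install markdown; stand-in renderer
from diskcache import Cache

html_cache = Cache("./.html_cache")    # hypothetical: one cache per concern

def render_markdown(text: str) -> str:
    # The content hash is part of the key, so edited source text produces
    # a new key and the stale entry simply stops being looked up.
    key = "md:" + hashlib.sha256(text.encode()).hexdigest()
    html = html_cache.get(key)
    if html is None:
        html = markdown.markdown(text)
        html_cache.set(key, html)
    return html
```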
6. The Memoize Decorator for Automatic Function Caching
DiskCache provides a @cache.memoize decorator that works similarly to functools.lru_cache but persists to disk. You decorate a function, and DiskCache automatically creates cache keys from the function name and its arguments. The decorator supports expiration times, so you can say "cache this for 5 minutes" for data that should refresh periodically, like a Reddit-style front page. Vincent discovered you can even exclude certain arguments from the cache key calculation, which solved his problem when a progress bar object was causing cache misses in notebook workflows. An expiration example follows the list below.
- Expiration/TTL support for automatic cache invalidation
- Argument exclusion for objects that should not affect caching
- Works with any picklable Python objects
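A small sketch of time-based expiration with memoize (directory and function are hypothetical):

```python
from diskcache import Cache

cache = Cache("./.front_page_cache")    # hypothetical directory

@cache.memoize(expire=300)              # recompute at most once every five minutes
def front_page_posts():
    # Stand-in for a slow database query or API aggregation.
    return ["post-1", "post-2", "post-3"]

print(front_page_posts())   # computed and cached
print(front_page_posts())   # served from disk until the 300-second TTL lapses
```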
7. FanoutCache for High-Concurrency Scenarios
For applications with many concurrent writers, DiskCache offers FanoutCache, which automatically shards data across multiple SQLite files. Since SQLite allows concurrent readers but writers block other writers, sharding reduces contention by spreading writes across multiple database files. The default is 8 shards, but you can configure this based on your expected number of concurrent writers. This is particularly useful for high-traffic web applications or parallel data processing pipelines; see the sketch after the list below.
- Automatic sharding across multiple SQLite files
- Reduces write contention
- Django integration uses FanoutCache by default
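A minimal FanoutCache sketch (the shard count shown is the documented default; the path is illustrative):

```python
from diskcache import FanoutCache

# Spread writes over several SQLite files to reduce writer-on-writer
# blocking under heavy concurrent load.
cache = FanoutCache("./.fanout_cache", shards=8, timeout=1)

cache["hits"] = 0                       # same dictionary-style API as Cache
value = cache.get("hits", default=0)
print(value)
```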
8. Built-in Django Integration
DiskCache ships with a Django-compatible cache backend that you can drop into your Django settings file. This replaces the need for Redis or Memcached as your Django cache backend while maintaining full compatibility with Django's caching APIs. You simply configure the backend as diskcache.DjangoCache and specify a location, and Django's existing caching decorators and low-level cache API work seamlessly. This is especially valuable for smaller deployments where running a separate cache server adds unnecessary operational complexity. A sample settings entry follows the list below.
- Drop-in replacement for Redis/Memcached in Django
- Full Django cache API compatibility
- grantjenks.com/docs/diskcache/djangocache.html
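The backend plugs into Django's normal CACHES setting. A settings.py entry along the lines shown in the DjangoCache docs (the location path and limits here are placeholders):

```python
# settings.py (path and limits are illustrative)
CACHES = {
    "default": {
        "BACKEND": "diskcache.DjangoCache",
        "LOCATION": "/var/tmp/django_cache",
        "TIMEOUT": 300,                       # Django's default per-key timeout, in seconds
        "SHARDS": 8,                          # DjangoCache is built on FanoutCache
        "DATABASE_TIMEOUT": 0.010,            # SQLite transaction timeout, in seconds
        "OPTIONS": {"size_limit": 2**30},     # cap the cache at 1 GB
    }
}
```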
9. Custom Serialization for Compression and Special Types
While DiskCache uses Python's pickle by default, you can implement custom disk classes to control serialization. The documentation includes an example using JSON with zlib compression, which can achieve 80-90% size reduction for text-heavy data like LLM responses or API results. Vincent experimented with quantized NumPy array storage, trading minimal precision loss for 4x disk space savings. For JSON serialization, the hosts recommended orjson over the standard library for better performance and broader type support, including dates and NumPy arrays. A compression sketch follows the list below.
- github.com/ijl/orjson - Fast JSON library with extended type support
- zlib compression for text-heavy caches
- Custom disk classes for specialized serialization needs
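The docs demonstrate this with a custom Disk subclass (their JSONDisk example). As a simpler stand-in for that approach, here is a hand-rolled sketch that compresses JSON itself before storing bytes in the cache (names, paths, and the compression level are illustrative):

```python
import json
import zlib
from diskcache import Cache

cache = Cache("./.compressed_cache")   # hypothetical directory

def put_json(key, obj, level=6):
    # Compress the JSON text before handing bytes to DiskCache;
    # text-heavy payloads (LLM responses, API results) often shrink 80-90%.
    cache[key] = zlib.compress(json.dumps(obj).encode("utf-8"), level)

def get_json(key):
    return json.loads(zlib.decompress(cache[key]).decode("utf-8"))

put_json("llm:answer", {"text": "a long response... " * 100})
print(get_json("llm:answer")["text"][:20])
```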
10. Eviction Policies and Cache Size Management
DiskCache includes several eviction policies to manage cache size automatically. The default policy is "least recently stored" (LRS), but you can also use "least recently used" (LRU) or "least frequently used" (LFU). The default size limit is 1GB, which prevents unbounded cache growth but might catch developers off guard if they expect unlimited storage. You can also set expiration times on individual cache entries, which is useful for data that should automatically refresh after a certain period. A configuration sketch follows the list below.
- Least Recently Stored (LRS) - default
- Least Recently Used (LRU)
- Least Frequently Used (LFU)
- Configurable size limits and TTL
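A configuration sketch using the eviction_policy and size_limit settings (the directory and values are arbitrary):

```python
from diskcache import Cache

# Cap the cache at 4 GB and evict the least recently used entries
# instead of the default least recently stored.
cache = Cache(
    "./.bounded_cache",
    size_limit=4 * 1024**3,
    eviction_policy="least-recently-used",
)

cache.set("front-page", "<html>...</html>", expire=600)  # per-key TTL in seconds
```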
11. Advanced Data Structures: Deque and Index
Beyond simple key-value caching, DiskCache provides higher-level data structures. The Deque (pronounced "deck") class is a persistent double-ended queue useful for cross-process communication or simple job queues, potentially replacing Celery for simpler use cases. The Index class is an ordered dictionary with transactional support, allowing you to read or update multiple values atomically. These structures enable patterns like work distribution across processes without requiring external message brokers; see the sketch after this list.
- Deque for persistent queues and cross-process communication
- Index for ordered dictionaries with transactions
- Potential replacement for simple Celery use cases
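A small sketch of both structures (paths and payloads are made up):

```python
from diskcache import Deque, Index

# A persistent double-ended queue: one process can push work,
# another can pop it, with no broker in between.
jobs = Deque(directory="./.job_queue")
jobs.append({"task": "render", "id": 1})
next_job = jobs.popleft()

# An ordered, persistent dict; transact() groups multi-key changes atomically.
index = Index("./.index_store")
with index.transact():
    index["total"] = index.get("total", 0) + 1
    index["last_job"] = next_job["id"]
```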
12. Related Tools in the SQLite Ecosystem
The conversation touched on several complementary tools in the SQLite ecosystem. Litestream provides continuous streaming backup of SQLite databases to S3-compatible storage, making SQLite viable for production deployments with proper backup strategies. Plash is a new Python-focused hosting platform from Answer AI (Jeremy Howard's company) that provides persistent SQLite as a first-class database option. These tools reflect a broader trend of reconsidering SQLite for production use cases that previously required PostgreSQL or MySQL.
- litestream.io - Streaming SQLite backup to S3
- pla.sh - Python hosting with persistent SQLite
- github.com/benbjohnson/litestream
13. Vincent's Code Archaeology Project
Vincent built a visualization project called "Code Archaeology" that demonstrates DiskCache in a real-world data science context. The project analyzes Git repositories by running git blame across 100 time samples to show how code evolves over time, with sedimentary-style charts showing which lines of code survive versus get replaced. Processing large repositories like Django (550,000 lines) took over two hours, making caching essential for iterative development. The project is open source and welcomes contributions of additional repository analyses.
- koaning.github.io/codearch - Live visualization
- Threading combined with DiskCache for parallel processing
- Real-world example of caching expensive git operations
14. Project Maintenance Status and Longevity
The hosts acknowledged that DiskCache has not had a release since 2023, with the maintainer (Grant Jenks) possibly busy with work at OpenAI. However, both Vincent and Michael emphasized this should not discourage adoption. The library is mature, stable, and built on SQLite which is actively maintained. Vincent stated he would need to see the library "break vividly in front of my face" before considering alternatives. The codebase is open source and could be forked if necessary, but the underlying SQLite dependency makes breaking changes extremely unlikely.
- Last PyPI release: 2023
- Built on actively-maintained SQLite
- Considered stable/"done" rather than abandoned
Interesting Quotes and Stories
"It really behaves like a dictionary, except you persist to disk and under the hood is using SQLite. I think that does not cover everything, but you get quite close if that is the way you think about it." -- Vincent Warmerdam
"Your cloud SSD is sitting there, bored, and it would like a job." -- Michael Kennedy (from episode summary)
"I pay something like $5 for 400 gigs of disk. Do you know how much 400 gigs of RAM will cost on the cloud? There goes the college tuition." -- Michael Kennedy
"I vividly remember when I started college, people were always saying, keep it in memory because it is way faster than disk. But I think we have got to let a lot of that stuff just go." -- Vincent Warmerdam
"This cache needs to break vividly in front of my face for me to consider not using it. Because it does feel like it is done, and in a really good way." -- Vincent Warmerdam
"There are only two hard things in computer science: naming things, cache invalidation, and off by one errors." -- Referenced during discussion
"One thing I learned is that caching is actually hard to get right. It is on par with naming things." -- Vincent Warmerdam
"How do you fix that with a whole bunch of infrastructure? No, with a decorator." -- Vincent Warmerdam on the simplicity of DiskCache
Story: The Progress Bar Bug
Vincent shared a debugging story from building his code archaeology project. He was using the memoize decorator but noticed his cache was never being hit. After investigation, he discovered the problem: one of his function arguments was a Marimo progress bar object. Every time he reran the notebook, a new progress bar instance was created with a different object ID, causing every cache lookup to miss. The solution was DiskCache's ability to exclude specific arguments from the cache key calculation - a feature he was relieved to find already existed in the library.
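A hedged sketch of that fix, assuming the memoize decorator's documented ignore parameter and a hypothetical progress-bar argument:

```python
from diskcache import Cache

cache = Cache("./.archaeology_cache")   # hypothetical directory

# "bar" names the parameter to leave out of the cache key, so a freshly
# created progress-bar object no longer causes every lookup to miss.
@cache.memoize(ignore={"bar"})
def analyze_commit(repo: str, sha: str, bar=None) -> dict:
    if bar is not None:
        bar.update(1)                   # hypothetical progress-bar API
    return {"repo": repo, "sha": sha, "lines": 123}
```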
Key Definitions and Terms
LRU Cache: Least Recently Used cache, a caching strategy that evicts the least recently accessed items first. Python's functools.lru_cache implements this in memory.
Memoization: An optimization technique that stores the results of expensive function calls and returns the cached result when the same inputs occur again.
Serialization/Pickle: The process of converting Python objects into a byte stream for storage or transmission. Pickle is Python's built-in serialization format.
Sharding: Distributing data across multiple storage locations (in this case, multiple SQLite files) to reduce contention and improve performance.
TTL (Time To Live): An expiration time set on cached data after which it is automatically considered stale and removed.
ACID Compliance: A set of database properties (Atomicity, Consistency, Isolation, Durability) that guarantee reliable transaction processing. SQLite is ACID-compliant.
Web Garden: A deployment pattern where multiple worker processes handle web requests, typically managed by a WSGI server like Gunicorn or uWSGI.
NVMe SSD: Non-Volatile Memory Express Solid State Drive, a modern storage interface that provides significantly faster read/write speeds than traditional SATA SSDs.
Learning Resources
Here are resources to learn more and go deeper on topics covered in this episode:
LLM Building Blocks for Python: Vincent's course that originally sparked this episode, covering practical LLM techniques including caching strategies for API calls and benchmarks.
Agentic AI Programming for Python: Collaborate with AI like a skilled junior developer. Build production features in hours with Cursor and Claude. Get real results.
Python for Absolute Beginners: If you are new to Python and want to understand dictionaries, decorators, and other fundamentals referenced in this episode.
HTMX + Flask: Modern Python Web Apps: Covers web development patterns where DiskCache caching techniques would be immediately applicable.
Overall Takeaway
DiskCache represents a powerful example of choosing the right tool for the job rather than reaching for the most complex solution. In an era where developers often default to running Redis or Memcached servers for caching, DiskCache offers a compelling alternative that requires no additional infrastructure, leverages the rock-solid reliability of SQLite, and takes advantage of modern fast SSDs that have closed much of the performance gap with RAM. Whether you are building web applications, running LLM experiments, or processing data in notebooks, the pattern is the same: expensive computations should not be repeated unnecessarily.
The library embodies the Unix philosophy of doing one thing well. Its dictionary-like API means there is virtually no learning curve for Python developers, while advanced features like sharding, transactions, and custom serialization are available when needed. Vincent's observation that this is in his "top five favorite Python libraries" and Michael's extensive production use on Talk Python speak to its real-world reliability.
Perhaps most importantly, this episode challenges conventional wisdom about caching architecture. You do not always need a separate cache server. You do not always need to keep everything in memory. Sometimes the simplest solution - a well-designed SQLite file on a fast SSD - is exactly right. As Vincent put it: "Give this cache thing a try. It is just good software."
Links from the show
LLM Building Blocks for Python course: training.talkpython.fm
JSONDisk: grantjenks.com
Git Code Archaeology Charts: koaning.github.io
Talk Python Cache Admin UI: blobs.talkpython.fm
Litestream SQLite streaming: litestream.io
Plash hosting: pla.sh
Watch this episode on YouTube: youtube.com
Episode #534 deep-dive: talkpython.fm/534
Episode transcripts: talkpython.fm
Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong
---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython
Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython
Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy