Python apps with LLM building blocks
Episode Deep Dive
Guest Introduction and Background
Vincent Warmerdam is a data scientist, educator, and developer advocate currently working at Marimo, a modern notebook environment for Python. He's best known for his educational work through CalmCode.io, where he provides free Python tutorials, and his YouTube channels covering both Python programming and ergonomic keyboards (which have grown to over 5,000 subscribers). Vincent previously worked at Explosion (makers of spaCy) and has extensive experience in machine learning and natural language processing. He's a strong advocate for pragmatic, boring technology that just works, and has a passion for building tools that make developers more productive. His episode on spaCy from the previous year was the number one most downloaded Talk Python episode of that year.
What to Know If You're New to Python
If you're new to Python and want to get the most out of this episode's content on integrating LLMs into Python applications, here are the key concepts to understand:
- Decorators and functions: LLM integration often uses Python decorators to wrap functions with caching, validation, or API calls. Understanding how functions work and how decorators modify their behavior is essential.
- Type hints and Pydantic: Modern LLM work relies heavily on type annotations to define structured outputs. Familiarity with Python's type hint system (strings, integers, lists, optional types) will help you understand how to constrain LLM responses.
- Async programming basics: While the episode doesn't cover this deeply, many LLM APIs benefit from async/await patterns for better performance when making multiple API calls.
- Caching concepts: Understanding what caching is and why it matters (avoiding redundant expensive operations) is central to building cost-effective LLM applications.
Key Points and Takeaways
1. Treat LLMs as Unreliable APIs That Need Defensive Programming
LLMs represent a fundamentally different kind of building block compared to traditional functions in your code. Unlike normal functions where you put something in and predictably get the same thing out, LLMs are stochastic - you can put the same input in twice and get different outputs. This means you need to think defensively, putting boundaries and validation around LLM calls. The core principle is to acknowledge that LLMs are weird building blocks that require special handling, including caching, validation, and careful monitoring of what goes in and what comes out. Vincent emphasized this is not just about using the right tools, but about developing the right mindset for working with these probabilistic systems.
Links and Tools:
- LLM Building Blocks for Python Course
- Pydantic for data validation
- Instructor library for retry mechanics with validation
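The defensive mindset described above can be sketched in a few lines. The stub below stands in for a real model call (the names `flaky_llm` and `classify_sentiment` are illustrative, not from any library): the same prompt yields different outputs, so the wrapper validates every response before letting it into the rest of the program and retries within a bounded loop.

```python
from itertools import cycle

# Deterministic stand-in for a stochastic LLM: same prompt, different outputs.
_fake_outputs = cycle(["banana", "  positive ", "NEGATIVE"])

def flaky_llm(prompt: str) -> str:
    return next(_fake_outputs)

def classify_sentiment(text: str, max_retries: int = 5) -> str:
    """Put a validation boundary and a retry limit around the unreliable call."""
    allowed = {"POSITIVE", "NEGATIVE"}
    for _ in range(max_retries):
        answer = flaky_llm(f"Classify the sentiment: {text}").strip().upper()
        if answer in allowed:  # only validated output escapes the boundary
            return answer
    raise ValueError(f"No valid label after {max_retries} attempts")

label = classify_sentiment("I love this library")
print(label)  # POSITIVE (first reply "banana" fails validation, second passes)
```

In real code the stub would be an API call, and the validation could be a Pydantic model instead of a set membership check, but the shape is the same: never trust a single raw response.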
2. DiskCache Is Essential for LLM Development
DiskCache is a SQLite-based caching library that should be in every LLM developer's toolkit. It allows you to cache LLM responses to disk so they persist across program restarts, preventing you from making the same expensive API call twice. Vincent and Michael both praised it as "unbelievably good" - it works like a simple dictionary or decorator but stores everything in SQLite, making inspection easy. You can set time-to-live values, add custom keys based on model/prompt/settings tuples, and even store multiple outputs for the same input by adding an integer to the key. This not only saves money but dramatically speeds up development iteration since you're not waiting for the same API calls repeatedly.
Links and Tools:
- DiskCache
- SQLite-backed persistent caching
- Works across application restarts
- Decorator pattern for easy integration
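The pattern DiskCache gives you out of the box (with its `Cache` object and `memoize` decorator) boils down to the following. This is a stdlib-only sketch of the idea, not DiskCache's API: responses keyed by the (model, prompt) tuple are stored in SQLite on disk, so repeated calls cost nothing and survive restarts.

```python
import json
import sqlite3
import tempfile
from pathlib import Path

# SQLite-backed cache on disk, like DiskCache provides.
db_path = Path(tempfile.mkdtemp()) / "llm_cache.sqlite"
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)")

calls = 0

def expensive_llm(model: str, prompt: str) -> str:
    """Stand-in for a paid API call."""
    global calls
    calls += 1
    return f"summary of {prompt!r} from {model}"

def cached_llm(model: str, prompt: str) -> str:
    key = json.dumps([model, prompt])  # custom key from the model/prompt tuple
    row = conn.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    if row:
        return row[0]
    value = expensive_llm(model, prompt)
    conn.execute("INSERT INTO cache (key, value) VALUES (?, ?)", (key, value))
    conn.commit()
    return value

cached_llm("gpt-4o-mini", "hello")  # miss: hits the API
cached_llm("gpt-4o-mini", "hello")  # hit: served from SQLite
print(calls)  # 1
```

Because the store is plain SQLite, you can inspect it with any SQLite tool, and adding an integer to the key gives you multiple cached outputs per input, exactly as described above.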
3. Simon Willison's LLM Library Provides Boring, Reliable Abstractions
Vincent called Simon Willison's LLM library "the most boring" as a compliment - it does a few things in a very predictable, unsurprising way without walling you into complex abstractions. Originally designed as a command-line utility, it has a clean Python API that makes it trivial to swap between different LLM providers (OpenAI, Anthropic, Mistral, etc.) through a plugin ecosystem. This means individual providers can maintain their own plugins rather than one maintainer drowning in compatibility work. The library handles the basics extremely well and is particularly good for rapid prototyping and experimentation, making it Vincent's go-to tool when he wants to quickly test something with LLMs.
Links and Tools:
- LLM by Simon Willison
- Plugin ecosystem for multiple providers
- Command-line and Python API
- SQLite logging of prompts and responses
4. Structured Output With Pydantic Transforms LLMs Into Programmable Components
One of the biggest challenges with early LLMs was that text went in and unpredictable text came out, making it hard to build reliable software. Modern LLMs are now trained on structured output tasks, allowing you to define a Pydantic model and receive guaranteed JSON that matches your schema. You can specify that you want a list of strings, a classification from specific values using literals, or complex nested objects. Vincent noted two caveats: Pydantic can express more than the JSON schema spec supports, and simpler structures work more reliably than deeply nested ones. Even so, this capability fundamentally changes LLMs from chat interfaces into programmable components you can integrate into your software architecture.
Links and Tools:
- Pydantic
- JSON schema generation
- Type-safe LLM outputs
- Validation and parsing
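As a concrete sketch: you define the schema you want as a Pydantic model, including a `Literal` for classification-from-fixed-values, and parse whatever JSON the model returns against it. The `TicketTriage` model and the hard-coded JSON string below are illustrative; in practice the JSON would come back from a structured-output API call constrained by this schema.

```python
import json
from typing import List, Literal

from pydantic import BaseModel

class TicketTriage(BaseModel):
    summary: str
    tags: List[str]                              # a list of strings
    priority: Literal["low", "medium", "high"]   # classification from fixed values

# Pretend this JSON came back from an LLM constrained by the schema above.
raw = '{"summary": "Login fails on mobile", "tags": ["auth", "mobile"], "priority": "high"}'
ticket = TicketTriage(**json.loads(raw))
print(ticket.priority)  # high
```

If the model returns `"urgent"` instead of one of the three literals, validation fails loudly instead of letting a bad value flow through your application.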
5. The Instructor Library Uses Validation Errors as Prompts for Self-Correction
Instructor takes Pydantic validation a step further with a clever trick: if an LLM returns something that doesn't validate against your Pydantic model, Instructor takes the validation error message, combines it with what the LLM returned, and asks the LLM to try again with the hint. This retry mechanism can handle cases where your Pydantic validation is more sophisticated than the JSON schema can express. Vincent noted this was more critical in earlier LLM eras when structured output wasn't as reliable, but it's still valuable for lightweight models or complex validation rules. The library uses the tenacity package under the hood to manage retries with limits.
Links and Tools:
- Instructor
- Validation retry mechanism
- Works with Pydantic models
- Uses tenacity for retry logic
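The core trick is small enough to sketch with the stdlib. Everything here is a stand-in (Instructor does this with Pydantic models and tenacity, not these hand-rolled names): when validation fails, the error message is appended to the prompt as a hint and the model is asked again, up to a retry limit.

```python
def validate_age(raw: str) -> int:
    """Plays the role of a Pydantic parse: may raise with a useful message."""
    age = int(raw)
    if not 0 <= age <= 130:
        raise ValueError(f"age {age} is out of range 0-130")
    return age

responses = iter(["200", "42"])  # first reply fails validation, second passes

def fake_llm(prompt: str) -> str:
    return next(responses)

def ask_with_retries(prompt: str, max_retries: int = 3) -> int:
    for _ in range(max_retries):
        reply = fake_llm(prompt)
        try:
            return validate_age(reply)
        except ValueError as err:
            # The Instructor trick: feed the validation error back as a hint.
            prompt = f"{prompt}\nYou answered {reply!r} but: {err}. Try again."
    raise RuntimeError("still invalid after retries")

age = ask_with_retries("How old is the user? Reply with a number.")
print(age)  # 42
```

The key design point is that the validator's error message doubles as prompt material, which is why descriptive validation errors pay off here.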
6. LLMs Often Lose to Traditional ML When Properly Evaluated
Vincent shared stories from PyData conferences where data scientists, given budget to experiment with LLMs, ended up putting scikit-learn or spaCy into production instead. The mandate from above to "do AI" gave them permission to build proper evaluation frameworks and compare approaches rigorously. When they tested LLMs against lightweight traditional models with enough training data (which they needed anyway for evaluation), the traditional models often performed as well or better while being faster, cheaper, and deterministic. This isn't a failure of LLMs but a reminder that evaluation methodology matters more than hype, and sometimes the boring solution is the right one.
Links and Tools:
- scikit-learn
- spaCy
- Importance of evaluation frameworks
- Cost and performance trade-offs
7. Open Router Provides One API Key for All LLM Models
Open Router is a service that routes to any LLM model you want with a single API key, making it trivial to experiment with the latest models as soon as they're released. If you've set up caching and evaluation functions properly, switching between models is just changing a string. They aggregate multiple GPU providers competing for the lowest prices and add about a 5% fee. This is particularly valuable for rapid experimentation - Vincent uses it to quickly test whether a 7B, 14B, or larger model hits the right quality/cost sweet spot for a specific task. The service has direct integration with tools like Cline for easy access to hundreds of models.
Links and Tools:
- Open Router
- Single API for multiple LLM providers
- Competitive GPU provider pricing
- Easy model comparison and experimentation
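With a routing service in front, "try the 14B model instead" really is just a string change, and comparing models becomes a loop. The sketch below stubs out the client call and uses made-up model names and a toy scoring function; with OpenRouter the `complete` function would be a real API call and the model string would be the only thing that varies per iteration.

```python
def complete(model: str, prompt: str) -> str:
    """Stand-in for a routed API call; only the model string changes."""
    canned = {
        "small-7b": "short answer",
        "medium-14b": "a more detailed answer",
    }
    return canned.get(model, "unknown model")

def score(output: str) -> int:
    """Toy evaluation stand-in: longer answers score higher."""
    return len(output.split())

results = {m: score(complete(m, "Summarize the release notes"))
           for m in ["small-7b", "medium-14b"]}
print(results)
```

Combined with the caching pattern from earlier, each (model, prompt) pair is only ever paid for once, which is what makes this kind of sweep cheap.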
8. Run Your Own Models Locally With Ollama or LM Studio
For smaller models or privacy-sensitive applications, running models locally on your own hardware is increasingly practical. Ollama is a command-line utility that makes it easy to download and run models, providing an OpenAI-compatible API endpoint. LM Studio offers a more visual UI for discovering, configuring, and running models, then exposes them via API. Michael runs the GPT OSS 20 billion parameter model on his M2 Pro Mac mini with 32GB RAM as his default LLM. Running local models eliminates API costs for experimentation and gives you control, though quality varies significantly across open-source models. Both tools support running models that can then be accessed through the LLM library or any OpenAI-compatible client.
Links and Tools:
- Ollama
- LM Studio
- OpenAI-compatible local API endpoints
9. SmartFunc Pattern Uses Docstrings as Prompts
Vincent demonstrated his SmartFunc library which turns Python functions into LLM calls using decorators. You define a function with typed parameters, add a decorator specifying which LLM backend to use, and write your prompt in the docstring. The function parameters can be referenced in the docstring (which can be a Jinja template), and return types can enforce structured output. This pattern makes it very quick to prototype LLM functionality - you can create a summarization function, a classification function, or any other LLM-powered operation with minimal boilerplate. While Vincent doesn't necessarily recommend others use SmartFunc itself, it demonstrates how you can build your own syntactic sugar on top of foundational libraries like LLM.
Links and Tools:
- SmartFunc
- Decorator-based LLM integration
- Docstring as prompt
- Type-driven structured output
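The docstring-as-prompt idea fits in a dozen lines of plain Python. This is a minimal sketch of the pattern, not SmartFunc's actual API (it uses `str.format` where SmartFunc supports Jinja templates, and `echo_backend` is a stub instead of a real model):

```python
import functools

def llm_func(backend):
    """Decorator: the wrapped function's docstring becomes the prompt."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**kwargs):
            prompt = func.__doc__.format(**kwargs)  # parameters fill the template
            return backend(prompt)
        return wrapper
    return decorator

def echo_backend(prompt: str) -> str:
    """Stand-in for a real model; just shows what would be sent."""
    return f"LLM saw: {prompt}"

@llm_func(echo_backend)
def summarize(text: str) -> str:
    """Summarize the following text in one sentence: {text}"""

out = summarize(text="Talk Python episode notes")
print(out)
```

This is the "build your own syntactic sugar" point in miniature: the decorator is trivial because the hard parts (caching, validation, the actual model call) live in the foundational libraries underneath.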
10. Pydantic AI Uses Types to Handle Conversational State
Pydantic AI and similar higher-order frameworks use Python's type system in sophisticated ways to manage conversational flows. For example, in a pizza ordering bot, you might define a type that's either a Pizza object (with size, toppings, etc.) or a string representing a follow-up question. The LLM decides whether it has enough information to return the structured Pizza object or needs to ask the user for more details by returning a string. This lets you handle complex form-filling conversations where different fields might be optional or conditional, all driven by type definitions rather than manual state management logic. Vincent emphasized that while these frameworks are powerful, keeping the base layer boring and focusing on well-defined types can get you very far.
Links and Tools:
- Pydantic AI
- Type-driven conversation management
- Union types for flow control
- Created by Samuel Colvin and team
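The pizza-bot example can be sketched with a plain union type and a stub in place of the model (the names here are illustrative, and Pydantic AI would handle the LLM side for you): the return type is either a finished `Pizza` or a `str` follow-up question, and the caller branches on which one came back instead of tracking conversation state by hand.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Pizza:
    size: str
    toppings: List[str]

# The model returns either a completed order or a follow-up question.
PizzaOrQuestion = Union[Pizza, str]

def order_step(user_message: str) -> PizzaOrQuestion:
    """Stand-in for the LLM deciding whether it has enough information."""
    if "large" in user_message and "mushroom" in user_message:
        return Pizza(size="large", toppings=["mushroom"])
    return "What size would you like, and which toppings?"

reply = order_step("I want a pizza")
if isinstance(reply, Pizza):
    print(f"Order placed: {reply}")
else:
    print(f"Bot asks: {reply}")  # the type itself says the conversation continues
```

The state machine lives in the type definition: add an optional field or another union member and the flow changes, with no extra bookkeeping code.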
11. Cline Code Assistant Emphasizes Plan Mode vs Execute Mode
Cline (formerly Claude Dev) is an open-source VS Code extension that distinguishes between planning what to do and actually executing changes, with a UI that shows your context window usage and costs in real-time. Unlike some AI assistants, Cline makes you very aware of the cumulative cost as you work (the Fibonacci sequence effect - each interaction costs what you've spent plus more). It's model-agnostic, allowing you to bring your own API keys, and has direct integration with Open Router. The emphasis on plan mode before execution, combined with cost visibility, encourages more thoughtful use of AI assistance rather than just mashing the "continue" button. They recently introduced a CLI as well for command-line usage.
Links and Tools:
- Cline
- Open-source code assistant
- Plan vs execute workflow
- Real-time cost tracking
- Model-agnostic with API key support
12. Evaluation and Caching Enable Rapid Model Comparison
The combination of proper caching and evaluation functions creates a powerful workflow for LLM development. Once you've built a cached function and an evaluation approach (which might be manual inspection, automated metrics, or statistical testing), you can iterate extremely quickly. You can run the same prompt through different models, different temperature settings, or different prompting strategies, and because of caching, you only pay for each unique combination once. Vincent emphasized being "really strict about evaluations" as the methodology that lets you discover that sometimes scikit-learn beats an LLM, or a 7B model is good enough, saving significant costs.
Links and Tools:
- Importance of evaluation frameworks
- Statistical testing approaches
- A/B testing for LLM applications
- Cost optimization through comparison
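An evaluation function can be as simple as accuracy over a small labeled set; once it exists, any predictor (a rule, a cached LLM call, a scikit-learn model) plugs into the same harness. Everything below is a toy illustration with made-up data and stand-in predictors:

```python
def accuracy(predict, labeled_examples) -> float:
    """Fraction of labeled examples a predictor gets right."""
    hits = sum(predict(text) == label for text, label in labeled_examples)
    return hits / len(labeled_examples)

examples = [("great product", "pos"), ("terrible", "neg"), ("love it", "pos")]

def rule_based(text: str) -> str:  # the "boring" baseline
    return "pos" if any(w in text for w in ("great", "love")) else "neg"

def cached_llm_predict(text: str) -> str:  # stand-in for a cached LLM call
    return "pos"

print(accuracy(rule_based, examples))          # the baseline wins on this toy set
print(accuracy(cached_llm_predict, examples))
```

Being "really strict about evaluations" mostly means refusing to ship whichever predictor you cannot run through a harness like this.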
13. Marimo Notebooks Enable LLM Prototyping With UI
Vincent now works at Marimo, a modern notebook environment where notebooks are pure Python files under the hood, enabling proper version control, pytest integration, and dependency management via uv. For LLM work specifically, Marimo excels at blending Python code with interactive UI elements, making it easy to build quick prototypes with text boxes, sliders, and other controls. The reactivity model (using abstract syntax trees to understand cell dependencies) prevents the hidden state problems that plague Jupyter. Vincent uses Marimo for all his rapid LLM prototyping because he can quickly wire up an LLM call to UI elements and iterate. It also supports running notebooks as command-line scripts and has native support for unit tests within notebooks.
Links and Tools:
- Marimo
- Reactive notebook environment
- Pure Python file format
- Integrated UI components for LLM experimentation
- Git-friendly and testable
Interesting Quotes and Stories
"I just found the LLM library by Simon Willison by far to be the most boring. And I mean that really in a good way, just unsurprising, only does a few things. The few things that it does is just in a very predictable way." -- Vincent Warmerdam
"If you haven't used DiskCache before, it definitely feels like one of these libraries that because I have it in my back pocket, it just feels like I can tackle more problems." -- Vincent Warmerdam
"Your previous experience as a proper engineer will still help you write good LLM software. However, from a similar perspective, I also think that we do have this generation of like data scientists... thinking analytically being quite critical of the output of an algorithm. That's also like a good bone to have in this day and age." -- Vincent Warmerdam
"I've heard a story a bunch of times now that because of the hype around LLMs and AI, after it was implemented, after they did all the benchmarks, it turns out that AI is the reason that scikit-learn is now in production in a bunch of places." -- Vincent Warmerdam
"If you can just focus on the right types and make sure that all that stuff is kind of sane, then you also keep the extractions at bay which is I think also convenient especially early on." -- Vincent Warmerdam
"Definitely expose yourself to LLMs. And if that inspires you, that's great. But also try to not overly rely on it either... I'm building flashcard apps for myself so I'm still kind of in the loop." -- Vincent Warmerdam
"If you have better tools you should also have better ideas and if that's not the case then something is wrong because that's then you get into the self-learned helplessness territory." -- Vincent Warmerdam
"I would be really disappointed if I'm the only person making packages that I've never made before that are definitely trying to reach new heights... Try to inspire yourself a bit more and do more inspirational stuff so more cool things happen on my timeline." -- Vincent Warmerdam
Key Definitions and Terms
Structured Output: The ability of modern LLMs to return responses in a guaranteed format (like JSON matching a specific schema) rather than free-form text, enabling reliable integration into software systems.
DiskCache: A Python library that provides a persistent, SQLite-backed caching system that survives program restarts, particularly valuable for expensive operations like LLM API calls.
Plugin Ecosystem: An architectural pattern where a core library provides extension points for third parties to add support for different services (like different LLM providers) without the core maintainer handling all integrations.
Pydantic: A Python library for data validation and settings management using Python type annotations, commonly used to define structured schemas for LLM outputs.
Validation Retry: A pattern where failed validation errors are fed back to an LLM as hints to try generating a valid response again, implemented by libraries like Instructor.
Evaluation Framework: A systematic approach to testing and comparing LLM outputs, essential for determining whether an LLM approach is better than traditional methods or whether one model performs better than another.
Open Router: A service that provides a single API endpoint and key to access dozens of different LLM models from various providers, simplifying model experimentation.
Ollama: A command-line tool for running large language models locally on your own hardware with an OpenAI-compatible API.
Marimo: A modern Python notebook environment where notebooks are pure Python files with reactive cell dependencies, designed to avoid hidden state problems.
Learning Resources
These courses and materials will help you build on the concepts discussed in this episode. Whether you're looking to integrate LLMs into your applications or strengthen your Python foundations, they provide hands-on, practical knowledge.
LLM Building Blocks for Python: Vincent Warmerdam's course that teaches you to integrate large language models into your Python applications with practical, code-first techniques for real-world LLM development including structured outputs, caching, async pipelines, and production-ready patterns.
Just Enough Python for Data Scientists: This course provides essential Python and software engineering practices for data scientists, covering clean functions, importable packages, git workflows, debugging, and reproducible environments - all foundations that make LLM work more reliable.
Python for Absolute Beginners: If you're new to Python and want to understand the fundamentals before diving into LLM integration, this comprehensive beginner course covers all the core concepts at a pace designed for those just starting their programming journey.
Rock Solid Python with Python Typing: Since type hints are central to modern LLM work with Pydantic and structured outputs, this course teaches you the ins-and-outs of Python's type system and how frameworks leverage it.
Data Science Jumpstart with 10 Projects: Matt Harrison's hands-on course teaches practical data science skills through 10 diverse projects, building the analytical mindset Vincent emphasized as valuable for critical evaluation of LLM outputs versus traditional approaches.
Overall Takeaway
This episode challenges the prevailing narrative that LLMs should replace traditional programming approaches. Instead, Vincent Warmerdam presents a more nuanced view: LLMs are powerful but unpredictable tools that require defensive programming, rigorous evaluation, and the wisdom to know when a boring, traditional solution is actually better. The key insight is that LLM integration is as much about mindset and methodology as it is about tools - you need caching to avoid waste, evaluation frameworks to measure success, type systems to constrain outputs, and the discipline to keep learning rather than falling into "learned helplessness."
The episode is packed with practical tools (DiskCache, Simon Willison's LLM library, Pydantic, Instructor, Open Router) but the deeper message is about maintaining your capabilities as a developer even as automation improves. Vincent's call to action is inspiring: don't just use AI tools to make what you already make faster; use them to build things you couldn't build before. Be deliberate about maintaining your own knowledge through flashcards and continuous learning. Evaluate rigorously enough that you sometimes choose scikit-learn over an LLM. And most importantly, guard against the WALL-E future where convenience leads to helplessness - instead, channel the mandate for AI from above into permission to do better engineering with proper testing and evaluation. The future belongs not to those who can push the "continue" button most effectively, but to those who can thoughtfully integrate these new capabilities while maintaining their fundamental problem-solving skills.
Links from the show
Vincent on Mastodon: @koaning
LLM Building Blocks for Python Course: training.talkpython.fm
Top Talk Python Episodes of 2024: talkpython.fm
LLM Usage - Datasette: llm.datasette.io
DiskCache - Disk Backed Cache (Documentation): grantjenks.com
smartfunc - Turn docstrings into LLM-functions: github.com
Ollama: ollama.com
LM Studio - Local AI: lmstudio.ai
marimo - A Next-Generation Python Notebook: marimo.io
Pydantic: pydantic.dev
Instructor - Complex Schemas & Validation (Python): python.useinstructor.com
Diving into PydanticAI with marimo: youtube.com
Cline - AI Coding Agent: cline.bot
OpenRouter - The Unified Interface For LLMs: openrouter.ai
Leafcloud: leaf.cloud
OpenAI looks for its "Google Chrome" moment with new Atlas web browser: arstechnica.com
Watch this episode on YouTube: youtube.com
Episode #528 deep-dive: talkpython.fm/528
Episode transcripts: talkpython.fm
Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong
---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython
Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython
Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy