Bonus
ATtiny2313 USBtinyISP Notes
A short follow-up to How to Build a USBtinyISP. The original post covers the build; these are two things I end up doing often enough to write down.
For anything fuse-related, the authoritative reference is the ATtiny2313 datasheet from Microchip; don’t take anecdotal values from the internet at face value.
Reading current fuses

Before you change fuses, dump what's actually on the chip:
avrdude -c usbtiny -p t2313 \
    -U hfuse:r:-:h \
    -U lfuse:r:-:h

:r:-:h means read, output to …
Continue Reading: ATtiny2313 USBtinyISP Notes

Bonus
Advanced AM Modulation Analysis with Matplotlib
This post builds on AM Wave Generation and Plotting with Matplotlib. Once you can generate an AM waveform, the interesting question is: is it any good? That means measuring modulation index and inspecting the spectrum. Below is a small AdvancedAMAnalyzer class that handles both, plus a worked example of how sideband power redistributes with the modulation index.
A reusable analyzer class

The class holds the sample rate and duration, and exposes three operations: generate a signal, calculate the …
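For tone modulation, the modulation index can also be estimated straight from the spectrum: each sideband has amplitude m/2 relative to the carrier, so m = 2 · (sideband amplitude) / (carrier amplitude). A minimal NumPy sketch of that idea (the signal parameters here are illustrative, not taken from the post):

```python
import numpy as np

fs = 10_000            # sample rate (Hz)
duration = 1.0         # whole seconds -> integer-Hz FFT bins, no leakage
t = np.arange(0, duration, 1 / fs)
fc, fm, m_true = 1_000, 50, 0.5   # carrier, message, true modulation index

# Tone-modulated AM: (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t)
am = (1 + m_true * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# Single-sided amplitude spectrum
spec = np.abs(np.fft.rfft(am)) / len(am) * 2
freqs = np.fft.rfftfreq(len(am), 1 / fs)

carrier = spec[np.argmin(np.abs(freqs - fc))]          # ~1.0
sideband = spec[np.argmin(np.abs(freqs - (fc + fm)))]  # ~m/2
m_est = 2 * sideband / carrier                         # ~0.5
```

This is also why sideband power redistributes with m: each sideband carries (m/2)² of the carrier's power, so total sideband power grows as m²/2.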
Continue Reading: Advanced AM Modulation Analysis with Matplotlib

Bonus
Advanced PySpark Performance Optimization Techniques
This builds on Performance Tuning on Apache Spark, which covers the fundamentals (spill, skew, shuffle, storage, serialization). Once those are under control, the next wins come from runtime-adaptive features. This post is a quick reference to the config keys, not a deep dive; read each one in the Spark configuration docs before flipping it.
Adaptive Query Execution (AQE)

AQE re-plans the not-yet-executed stages of a query at runtime, using statistics from completed shuffles. In current Spark (3.x) the …
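As a sketch of the kind of config keys involved (the values are illustrative starting points, not recommendations; check the Spark configuration docs for your version before relying on any of them):

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("aqe-sketch")
         # Master switch for Adaptive Query Execution
         .config("spark.sql.adaptive.enabled", "true")
         # Merge small post-shuffle partitions at runtime
         .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
         # Detect and split skewed join partitions from shuffle statistics
         .config("spark.sql.adaptive.skewJoin.enabled", "true")
         # Target partition size AQE aims for when coalescing/splitting
         .config("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64m")
         .getOrCreate())
```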
Continue Reading: Advanced PySpark Performance Optimization Techniques

Bonus
PySpark Design Patterns Quick Reference
Minimal runnable snippets for the five core patterns. For the why and when, see Implementing Design Patterns in PySpark Data Pipelines and Advanced PySpark Design Patterns.
Factory Pattern

Create data sources without specifying exact types:
from abc import ABC, abstractmethod

class DataSourceFactory(ABC):
    @abstractmethod
    def create_data_source(self):
        pass

class CSVFactory(DataSourceFactory):
    def create_data_source(self):
        return CSVDataSource()

class …
Continue Reading: PySpark Design Patterns Quick Reference

Bonus
Advanced PySpark Design Patterns: Implementation Examples
This builds on basic design patterns in PySpark pipelines (factory, singleton, builder, observer, pipeline). Once those are familiar, three more patterns cover more complex cases that come up in production: switching algorithms at runtime, adding cross-cutting concerns, and sharing skeleton logic across pipeline variants.
Strategy: swap algorithms at runtime

The Strategy pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable. In data pipelines this is …
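The shape of the pattern can be sketched in plain Python so it runs without a Spark cluster; the deduplication strategies here are hypothetical stand-ins for whatever DataFrame transformations the real pipeline would swap:

```python
from abc import ABC, abstractmethod

class DedupStrategy(ABC):
    """One of a family of interchangeable algorithms."""
    @abstractmethod
    def apply(self, rows):
        ...

class KeepFirst(DedupStrategy):
    def apply(self, rows):
        seen, out = set(), []
        for key, value in rows:
            if key not in seen:
                seen.add(key)
                out.append((key, value))
        return out

class KeepLast(DedupStrategy):
    def apply(self, rows):
        # Later entries overwrite earlier ones for the same key
        return list({key: (key, value) for key, value in rows}.values())

class CleaningStep:
    """Pipeline step that delegates to whichever strategy it is given."""
    def __init__(self, strategy: DedupStrategy):
        self.strategy = strategy

    def run(self, rows):
        return self.strategy.apply(rows)

rows = [("a", 1), ("b", 2), ("a", 3)]
first = CleaningStep(KeepFirst()).run(rows)  # [("a", 1), ("b", 2)]
last = CleaningStep(KeepLast()).run(rows)    # [("a", 3), ("b", 2)]
```

Swapping the algorithm at runtime is just passing a different strategy object; the step itself never changes.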
Continue Reading: Advanced PySpark Design Patterns: Implementation Examples

Bonus
PySpark Design Patterns for Data Pipelines
The five most useful design patterns for PySpark data pipelines are Factory (swap data sources without changing pipeline code), Singleton (one shared SparkSession), Builder (compose transformations step by step), Observer (monitor pipeline events), and Pipeline (chain stages together). This tutorial shows each with a complete, runnable PySpark example.
If you want to write PySpark data pipelines that stay clean as they grow, design patterns are the most reliable tool to reach for. Pipelines get …
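The Pipeline pattern from the list above can be sketched in a few lines of plain Python (lists stand in for DataFrames here, so the snippet runs anywhere; the chaining style also doubles as a tiny Builder):

```python
class Pipeline:
    """Chain transformation stages; each stage is a function data -> data."""
    def __init__(self):
        self.stages = []

    def add(self, stage):
        self.stages.append(stage)
        return self  # enables builder-style chaining

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

result = (Pipeline()
          .add(lambda xs: [x * 2 for x in xs])      # transform
          .add(lambda xs: [x for x in xs if x > 2]) # filter
          .run([1, 2, 3]))
# result == [4, 6]
```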
Continue Reading: PySpark Design Patterns for Data Pipelines