I love using PETL to read, transform, and validate datasets. Its API is straightforward, and its documentation, while not perfect, is really good. Unfortunately, PETL’s data processing, especially loading data into SQL Server, is so slow that it is unusable for anything but the smallest datasets.

Odo promised to be much faster at data loads, and it seemed dead simple to use, but I cannot get it to work. It turns out the package hasn’t been updated in five years, which is why it fails on the modern Python version I’m using. I just wasted an hour on it and will probably give up on it completely.

I can’t find any other ETL tool that will do the data load for me. That leaves the SQL Server Import/Export Wizard, which usually works and is fast, but it’s a one-shot process: without SQL Server Integration Services at my disposal, I can’t save the wizard’s work as a package and rerun it.

I suppose I will either have to fork Odo, get it working, and figure out how to pull my fork into my project, or write my own import routine with SQLAlchemy. Both options seem difficult and not worth the trouble. I’m not the database administrator on my project; I’m supposed to be the data analyst!
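For what it’s worth, the SQLAlchemy route may be less work than it sounds. Here is a rough sketch of a bulk load using SQLAlchemy Core’s executemany; the `sales` table and column names are hypothetical, and the sketch uses SQLite only so it is self-contained. For SQL Server the engine URL would be an `mssql+pyodbc://` string, and passing `fast_executemany=True` to `create_engine()` turns on pyodbc’s fast bulk-insert path.

```python
from sqlalchemy import (
    create_engine, MetaData, Table, Column,
    Integer, Float, String, insert,
)

def bulk_load(engine_url, rows):
    """Create a (hypothetical) sales table and load rows in one executemany."""
    # For SQL Server, something like:
    #   create_engine("mssql+pyodbc://user:pass@mydsn", fast_executemany=True)
    # SQLite is used here only to keep the sketch runnable.
    engine = create_engine(engine_url)
    metadata = MetaData()
    sales = Table(
        "sales", metadata,
        Column("id", Integer, primary_key=True),
        Column("amount", Float),
        Column("region", String(10)),
    )
    metadata.create_all(engine)
    with engine.begin() as conn:  # one transaction for the whole load
        conn.execute(insert(sales), rows)  # executemany over all rows at once
    return engine, sales

rows = [
    {"id": 1, "amount": 10.5, "region": "east"},
    {"id": 2, "amount": 3.25, "region": "west"},
]
engine, sales = bulk_load("sqlite:///:memory:", rows)
```

Passing the whole list of dicts to a single `execute()` is what makes this fast: the driver batches the inserts instead of issuing one round trip per row, which is exactly what PETL appears not to do.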