
# Introduction
Developers use pandas for data manipulation, but it can be slow, especially with large datasets. Because of this, many are looking for faster and lighter alternatives. These options keep the core features needed for analysis while focusing on speed, lower memory use, and simplicity. In this article, we look at five lightweight alternatives to pandas you can try.
# 1. DuckDB
DuckDB is like SQLite for analytics: an in-process database that can run SQL queries directly on comma-separated values (CSV) files. It is useful if you already know SQL or want fast queries inside machine learning pipelines. Install it with:
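pip install duckdb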
We will use the Titanic dataset and run a simple SQL query on it like this:
import duckdb
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"
# Run SQL query on the CSV
result = duckdb.query(f"""
SELECT sex, age, survived
FROM read_csv_auto('{url}')
WHERE age > 18
""").to_df()
print(result.head())
Output:
      sex   age  survived
0    male  22.0         0
1  female  38.0         1
2  female  26.0         1
3  female  35.0         1
4    male  35.0         0
DuckDB runs the SQL query directly on the CSV file and then converts the output into a DataFrame. You get SQL speed with Python flexibility.
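Because the queries are plain SQL, aggregations need no extra API. As a quick sketch (same dataset and the same read_csv_auto call as above), a GROUP BY computes the survival rate per sex:
import duckdb

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"

# Aggregate directly on the remote CSV: survival rate and count per sex
rates = duckdb.query(f"""
    SELECT sex, AVG(survived) AS survival_rate, COUNT(*) AS n
    FROM read_csv_auto('{url}')
    GROUP BY sex
""").to_df()
print(rates)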
# 2. Polars
Polars is one of the most popular data libraries available today. It is implemented in Rust and is exceptionally fast with minimal memory requirements. The syntax is also very clean. Let's install it using pip:
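pip install polars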
Now, let’s use the Titanic dataset to cover a simple example:
import polars as pl
# Load dataset
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"
df = pl.read_csv(url)
result = df.filter(pl.col("age") > 40).select(["sex", "age", "survived"])
print(result)
Output:
shape: (150, 3)
┌────────┬──────┬──────────┐
│ sex    ┆ age  ┆ survived │
│ ---    ┆ ---  ┆ ---      │
│ str    ┆ f64  ┆ i64      │
╞════════╪══════╪══════════╡
│ male   ┆ 54.0 ┆ 0        │
│ female ┆ 58.0 ┆ 1        │
│ female ┆ 55.0 ┆ 1        │
│ male   ┆ 66.0 ┆ 0        │
│ male   ┆ 42.0 ┆ 0        │
│ …      ┆ …    ┆ …        │
│ female ┆ 48.0 ┆ 1        │
│ female ┆ 42.0 ┆ 1        │
│ female ┆ 47.0 ┆ 1        │
│ male   ┆ 47.0 ┆ 0        │
│ female ┆ 56.0 ┆ 1        │
└────────┴──────┴──────────┘
Polars reads the CSV, filters rows based on an age condition, and selects a subset of the columns.
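Polars also ships a lazy API that defers work until you call collect(), which lets its query optimizer combine steps. Here is a minimal sketch of the same query in lazy form (with a local file, pl.scan_csv would defer the read itself as well):
import polars as pl

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"

# Build a lazy query; nothing executes until .collect()
result = (
    pl.read_csv(url)
    .lazy()
    .filter(pl.col("age") > 40)
    .select(["sex", "age", "survived"])
    .collect()
)
print(result)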
# 3. PyArrow
PyArrow is the Python library for Apache Arrow, a lightweight columnar data format; tools like Polars build on Arrow for its speed and memory efficiency. It is not a full substitute for pandas, but it is excellent for reading files and preprocessing. Install it with:
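pip install pyarrow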
For our example, let’s use the Iris dataset in CSV form as follows:
import pyarrow.csv as csv
import pyarrow.compute as pc
import urllib.request
# Download the Iris CSV
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv"
local_file = "iris.csv"
urllib.request.urlretrieve(url, local_file)
# Read with PyArrow
table = csv.read_csv(local_file)
# Filter rows
filtered = table.filter(pc.greater(table['sepal_length'], 5.0))
print(filtered.slice(0, 5))
Output:
pyarrow.Table
sepal_length: double
sepal_width: double
petal_length: double
petal_width: double
species: string
----
sepal_length: [[5.1,5.4,5.4,5.8,5.7]]
sepal_width: [[3.5,3.9,3.7,4,4.4]]
petal_length: [[1.4,1.7,1.5,1.2,1.5]]
petal_width: [[0.2,0.4,0.2,0.2,0.4]]
species: [["setosa","setosa","setosa","setosa","setosa"]]
PyArrow reads the CSV and converts it into a columnar format. Each column’s name and type are listed in a clear schema. This setup makes it fast to inspect and filter large datasets.
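The columnar layout also makes simple reductions cheap, and handing data to pandas when you need its full API is a single call. A small sketch, reusing the table and filtered objects from above:
import pyarrow.compute as pc

# Column-level reduction without leaving Arrow
mean_length = pc.mean(table["sepal_length"])
print(mean_length.as_py())

# Convert the filtered table to a pandas DataFrame when needed
df = filtered.to_pandas()
print(df.head())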
# 4. Modin
Modin is for anyone who wants faster performance without learning a new library. It uses the same pandas API but runs operations in parallel. You do not need to change your existing code; just update the import. Everything else works like normal pandas. Install it with pip:
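pip install "modin[all]"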
To see it in action, let's try a small example using the same Titanic dataset:
import modin.pandas as pd
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"
# Load the dataset
df = pd.read_csv(url)
# Filter the dataset
adults = df[df["age"] > 18]
# Select only a few columns to display
adults_small = adults[["survived", "sex", "age", "class"]]
# Display result
print(adults_small.head())
Output:
   survived     sex   age  class
0         0    male  22.0  Third
1         1  female  38.0  First
2         1  female  26.0  Third
3         1  female  35.0  First
4         0    male  35.0  Third
Modin spreads work across CPU cores, which means you will get better performance without having to do anything extra.
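Everything else in the pandas API carries over unchanged. For example, a groupby on the same DataFrame (a hypothetical follow-on to the code above) is partitioned and run in parallel automatically:
# Same pandas syntax; Modin parallelizes it across cores
survival_by_class = df.groupby("class")["survived"].mean()
print(survival_by_class)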
# 5. Dask
How do you handle data that is too big for memory? Dask is a great choice when your files are larger than your computer's random access memory (RAM). It uses lazy evaluation, so it does not load the entire dataset into memory at once, which lets you process millions of rows smoothly. Install it with:
pip install "dask[complete]"
To try it out, we can use the Chicago Crime dataset, as follows:
import dask.dataframe as dd
import urllib.request
url = "https://data.cityofchicago.org/api/views/ijzp-q8t2/rows.csv?accessType=DOWNLOAD"
local_file = "chicago_crime.csv"
urllib.request.urlretrieve(url, local_file)
# Read CSV with Dask (lazy evaluation)
df = dd.read_csv(local_file, dtype=str) # all columns as string
# Filter crimes classified as 'THEFT'
thefts = df[df['Primary Type'] == 'THEFT']
# Select a few relevant columns
thefts_small = thefts[["ID", "Date", "Primary Type", "Description", "District"]]
print(thefts_small.head())
Output:
          ID                    Date Primary Type     Description District
5   13204489  09/06/2023 11:00:00 AM        THEFT       OVER $500      001
50  13179181  08/17/2023 03:15:00 PM        THEFT    RETAIL THEFT      014
51  13179344  08/17/2023 07:25:00 PM        THEFT    RETAIL THEFT      014
53  13181885  08/20/2023 06:00:00 AM        THEFT  $500 AND UNDER      025
56  13184491  08/22/2023 11:44:00 AM        THEFT    RETAIL THEFT      014
Filtering (Primary Type == 'THEFT') and selecting columns are lazy operations: they only build a task graph. Nothing is computed until head() is called, and even then Dask reads just the partitions it needs instead of loading the whole file at once.
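When you do want a result over the full file, end the chain with .compute(). A small sketch (a hypothetical follow-on to the code above, counting thefts per district):
# Count thefts per district; .compute() executes the task graph
theft_counts = thefts.groupby("District").size().compute()
print(theft_counts.sort_values(ascending=False).head())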
# Conclusion
We covered five lightweight alternatives to pandas and when each one is a good fit. Check the official documentation for each library for full details:
- DuckDB: https://duckdb.org/docs/
- Polars: https://docs.pola.rs/
- PyArrow: https://arrow.apache.org/docs/python/
- Modin: https://modin.readthedocs.io/
- Dask: https://docs.dask.org/
If you run into any issues, leave a comment and I’ll help.
Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.

