Most data practitioners have learned SQL but never pandas. The goal of this article is to help readers who already think in SQL quickly find the equivalent pandas operations.
In the tests below the database contains three tables, tips, meat and births; the database version is MySQL 8.0.19.
(The database GUI tool used here is SQLyog.)
The libraries used in this article can be installed with pip:
pip install sqlalchemy
pip install pandas -U
pip install pandasql
Data sources:
tips.csv:https://raw.github.com/pandas-dev/pandas/master/pandas/tests/io/data/csv/tips.csv
meat.csv and births.csv:
from pandasql import load_meat, load_births
load_meat().to_csv("meat.csv", index=False)
load_births().to_csv("births.csv", index=False)
The most direct way to get data into pandas is to connect to the database and pull it with a ready-made SQL query, so that the data pandas receives has already been processed inside MySQL.
Get a database connection (adjust to your own database settings):
from sqlalchemy import create_engine
import pandas as pd
host = 'localhost'
database = 'pandas'
user_name = 'root'
password = '123456'
port = 3306
engine = create_engine(
f'mysql+pymysql://{user_name}:{password}@{host}:{port}/{database}')
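If the connection details are correct, a quick sanity check confirms the engine works (a minimal sketch; SELECT VERSION() is just an arbitrary query that returns the MySQL server version):
pd.read_sql("SELECT VERSION()", engine)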
Read the data:
query = "SELECT total_bill, tip, smoker, time FROM tips limit 5"
df = pd.read_sql(query, engine)
df
|   | total_bill | tip  | smoker | time   |
|---|------------|------|--------|--------|
| 0 | 16.99      | 1.01 | No     | Dinner |
| 1 | 10.34      | 1.66 | No     | Dinner |
| 2 | 21.01      | 3.50 | No     | Dinner |
| 3 | 23.68      | 3.31 | No     | Dinner |
| 4 | 24.59      | 3.61 | No     | Dinner |
Data that has been processed by pandas can also be written back to a specified table in the MySQL database:
df.to_sql(name='test', con=engine, if_exists='append', index=False)
The name parameter is the table name; if_exists controls what happens when the table already exists and accepts three values: 'fail', 'replace' and 'append'.
Note that with replace, the table pandas creates may not have the column types you expect, whereas append keeps the existing column types and simply appends rows.
When we need pandas to create the table with specific column types, we can import the corresponding types from sqlalchemy.types:
from sqlalchemy.types import CHAR
df.to_sql(name='test', con=engine, if_exists='replace',
index=False, dtype={"smoker": CHAR(3), "time": CHAR(6)})
This way the table pandas creates no longer uses the TEXT long-text type for these two columns.
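To double-check the column types that were actually created, the table definition can be read straight back; a minimal sketch (SHOW CREATE TABLE is MySQL-specific):
pd.read_sql("SHOW CREATE TABLE test", engine)  # the CHAR(3)/CHAR(6) columns should appear in the DDL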
Sometimes we need a database transaction when writing to several tables. engine.begin() opens a connection with a transaction, and the transaction is committed automatically when the connection closes:
with engine.begin() as conn:
    df.to_sql(name='test', con=conn, if_exists='append', index=False)
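As a minimal sketch of the multiple-table case (test_a and test_b are placeholder table names, not tables used elsewhere in this article): both writes commit together, and if either raises an exception the whole transaction is rolled back.
with engine.begin() as conn:
    # both inserts run on the same transactional connection
    df.to_sql(name='test_a', con=conn, if_exists='append', index=False)
    df.to_sql(name='test_b', con=conn, if_exists='append', index=False)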
Now, what if our data is not in a MySQL database but in local files instead?
The standard SQL query syntax is:
select (distinct) [columns]
from [table1] join [table2] on [join keys]
where [filter conditions]
group by [columns]
having [filter conditions]
order by [columns] desc
limit [count] offset [count]
Below, the pandas equivalent of each clause is given.
First, let pandas read the data:
import pandas as pd
tips = pd.read_csv("tips.csv")
meat = pd.read_csv("meat.csv")
births = pd.read_csv("births.csv")
For example, querying specific columns in MySQL:
SELECT total_bill, tip, smoker, time
FROM tips
LIMIT 5;
For SQL that is compatible with SQLite, we can use pandasql to bridge the gap seamlessly.
First we curry the sqldf function provided by pandasql, wrapping up the globals() argument that would otherwise have to be passed on every call:
from pandasql import sqldf
def pysqldf(q): return sqldf(q, globals())
This gives us a pysqldf function that takes a SQL statement and returns the result directly. Because the globals() namespace is baked in, every variable of DataFrame type can be registered as an in-memory SQLite table and queried by its variable name.
We can then run SQL directly against pandas objects:
query = """
SELECT total_bill, tip, smoker, time
FROM tips
LIMIT 5;"""
pysqldf(query)
Under the hood, however, pandasql uses sqlalchemy to create in-memory SQLite tables: every query first writes the relevant data into SQLite and then runs the SQL against it. That wastes a fair amount of performance, so it is still worth learning the basic pandas operations as well.
The pandas operation is:
tips[["total_bill", "tip", "smoker", "time"]].head(5)
When a new column is created:
SELECT *, tip/total_bill as tip_rate
FROM tips
LIMIT 5;
The basic pandas API operation is:
tips.assign(tip_rate=tips["tip"] / tips["total_bill"]).head(5)
For new columns produced by basic arithmetic, the eval function can also be used:
tips.eval("tip_rate=tip/total_bill").head(5)
Both give the same result as above.
Suppose we want the distinct values of certain columns:
SELECT DISTINCT smoker, TIME FROM tips;
The corresponding pandas operation is:
tips[["smoker", "time"]].drop_duplicates()
For the SQL statement:
SELECT *
FROM tips
WHERE time = 'Dinner' AND tip > 5.00;
The corresponding pandas operation is:
tips[(tips["time"] == "Dinner") & (tips["tip"] > 5.00)]
For this kind of all-column WHERE filter, the query function is simpler:
tips.query("time=='Dinner' & tip>5")
Both give the same result.
For the SQL statement:
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
The corresponding basic pandas API operation is:
tips[(tips["size"] >= 5) | (tips["total_bill"] > 45)]
With the query function:
tips.query("size>=5 | total_bill>45")
Both give the same result.
If we also need to select specific columns while filtering, for example for the SQL statement:
SELECT DAY, tip, sex
FROM tips
WHERE TIME = 'Dinner'
LIMIT 5;
using the loc function directly is much simpler:
tips.loc[tips["time"] == "Dinner", ["day", "tip", "sex"]].head(5)
To select the rows where a column is NULL:
SELECT *
FROM meat
WHERE broilers IS NULL LIMIT 5;
The corresponding pandas operation is:
meat[meat.broilers.isna()].head(5)
To select the rows where a column is NOT NULL:
SELECT *
FROM meat
WHERE broilers IS NOT NULL LIMIT 5;
The corresponding pandas operation is:
meat[meat.broilers.notna()].head(5)
Moving on to grouping and aggregation. For the SQL statement:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
The corresponding pandas operation:
tips.groupby("sex").size()
SELECT day, AVG(tip), COUNT(*)
FROM tips
GROUP BY day;
The corresponding pandas operation:
tips.groupby("day").agg({"tip": 'mean', "day": 'count'})
SELECT smoker, day, COUNT(*), AVG(tip)
FROM tips
GROUP BY smoker, day;
The corresponding pandas operation:
tips.groupby(["smoker", "day"])["tip"].agg(
[("tip_count", "count"), ("tip_avg", "mean")])
Grouping by a computed field:
SELECT
YEAR(DATE) AS YEAR,
SUM(beef) AS beef_total
FROM
meat
GROUP BY YEAR ;
The corresponding pandas operation:
(meat.groupby(pd.to_datetime(meat.date).dt.year)['beef']
.agg([('beef_total', 'sum')])
.rename_axis(index='year')
.reset_index()
.head(5)
)
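An equivalent formulation, assuming the date column parses cleanly with pd.to_datetime, is to materialise the computed year column first with assign and then group on it:
(meat.assign(year=pd.to_datetime(meat["date"]).dt.year)
     .groupby("year", as_index=False)["beef"].sum()
     .rename(columns={"beef": "beef_total"})
     .head(5))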
Filtering groups down to those whose average tip is below 3:
SELECT DAY, AVG(tip) avg_tip, COUNT(*)
FROM tips
GROUP BY DAY
HAVING avg_tip<3;
We just apply a query filter to the grouped result:
(tips.groupby("day")
     .agg({"tip": [('avg_tip', 'mean')], "day": 'count'})
     .droplevel(0, axis=1)
     .query("avg_tip<3"))
Next, joins. SQL:
SELECT
m.date, m.beef, b.births
FROM
meat m
INNER JOIN births b
ON m.date = b.date ;
The corresponding pandas operation:
pd.merge(meat, births, on="date")[["date", "beef", "births"]]
There are of course also left joins and right joins:
SELECT
m.date, m.beef, b.births
FROM
meat m
LEFT JOIN births b
ON m.date = b.date ;
SELECT
m.date, m.beef, b.births
FROM
meat m
RIGHT JOIN births b
ON m.date = b.date ;
The corresponding pandas operations:
pd.merge(meat, births, how="left", on="date")[["date", "beef", "births"]]
pd.merge(meat, births, how="right", on="date")[["date", "beef", "births"]]
Next, UNION. SQL:
SELECT DISTINCT TIME, size FROM tips WHERE smoker = 'No'
UNION ALL
SELECT DISTINCT TIME, size FROM tips WHERE smoker = 'Yes' ;
The corresponding pandas operations:
df1 = tips.loc[tips.smoker == 'No', ["time", "size"]].drop_duplicates()
df2 = tips.loc[tips.smoker == 'Yes', ["time", "size"]].drop_duplicates()
pd.concat([df1, df2])
With UNION instead:
SELECT DISTINCT TIME, size FROM tips WHERE smoker = 'No'
UNION
SELECT DISTINCT TIME, size FROM tips WHERE smoker = 'Yes' ;
the SQL also deduplicates the combined result; in pandas we simply concatenate and then drop duplicates:
pd.concat([df1, df2]).drop_duplicates()
Finally, sorting and paging. For the SQL:
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
or:
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 5,10;
which means: sort by tip in descending order, then take 10 records starting from the 6th.
The corresponding pandas operations:
Method 1:
tips.sort_values('tip', ascending=False).head(10+5).tail(10)
Method 2:
tips.nlargest(5+10, 'tip').tail(10)
Method 3:
tips.sort_values('tip', ascending=False, ignore_index=True).loc[5:5+10-1]
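Method 4 (a further equivalent: since iloc slices purely by position, no index reset is needed):
tips.sort_values('tip', ascending=False).iloc[5:5+10]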
References:
https://pypi.org/project/pandasql/
https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_sql.html