
Tags: pandas, python

10 Minutes to pandas (official walkthrough plus personal experiments)


Code download: http://download.csdn.net/download/sirwill/10043185

In [19]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Object Creation

In [20]:
s = pd.Series([1,2,3,np.nan,5,6])   # a Series-type array; np.nan marks missing data.
s
Out[20]:
0    1.0
1    2.0
2    3.0
3    NaN
4    5.0
5    6.0
dtype: float64
In [21]:
dates = pd.date_range("20170112", periods=6)  # a fixed-frequency DatetimeIndex, used as the index of the DataFrame created below.
dates
Out[21]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [22]:
list(dates)
dates.date
Out[22]:
array([datetime.date(2017, 1, 12), datetime.date(2017, 1, 13),
       datetime.date(2017, 1, 14), datetime.date(2017, 1, 15),
       datetime.date(2017, 1, 16), datetime.date(2017, 1, 17)], dtype=object)
In [23]:
list(dates.date)
Out[23]:
[datetime.date(2017, 1, 12),
 datetime.date(2017, 1, 13),
 datetime.date(2017, 1, 14),
 datetime.date(2017, 1, 15),
 datetime.date(2017, 1, 16),
 datetime.date(2017, 1, 17)]
In [24]:
dates.year
Out[24]:
Int64Index([2017, 2017, 2017, 2017, 2017, 2017], dtype='int64')
In [25]:
list(dates.year)
Out[25]:
[2017, 2017, 2017, 2017, 2017, 2017]
In [26]:
list(dates.day)
Out[26]:
[12, 13, 14, 15, 16, 17]
In [27]:
str(dates.date)
Out[27]:
'[datetime.date(2017, 1, 12) datetime.date(2017, 1, 13)\n datetime.date(2017, 1, 14) datetime.date(2017, 1, 15)\n datetime.date(2017, 1, 16) datetime.date(2017, 1, 17)]'
In [28]:
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list("ABCD"))  # Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns.
df
Out[28]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [29]:
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})  # Creating a DataFrame by passing a dict of objects that can be converted to series-like.
df2
Out[29]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [30]:
df2.dtypes
Out[30]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object
In [31]:
df.dtypes
Out[31]:
A    float64
B    float64
C    float64
D    float64
dtype: object
In [32]:
df2.<TAB>  # In Jupyter/IPython, pressing Tab after "df2." shows completions for column names and public attributes.
  File "<ipython-input-32-9c4c8dafe199>", line 1
    df2.<TAB> #If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
        ^
SyntaxError: invalid syntax
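
Here <TAB> stands for the Tab key, so executing the cell as plain Python naturally raises the SyntaxError above. Outside an interactive shell, a rough substitute (a small sketch, not part of the original notebook) is to inspect the attributes directly:

# List the public attributes of df2; the column names A-F appear alongside the methods.
print([name for name in dir(df2) if not name.startswith("_")][:10])
print(list(df2.columns))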

Viewing Data

In [36]:
df.head()
Out[36]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [37]:
df.index
Out[37]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [38]:
df.columns
Out[38]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [39]:
df.values
Out[39]:
array([[-2.2581213 ,  2.45619592,  0.77856734, -2.030407  ],
       [-0.65834822,  0.62249451,  0.38862467,  0.07358728],
       [ 0.58921899,  1.39279193,  0.60554535,  1.23153815],
       [-0.1519579 , -0.65524863, -2.1147252 , -0.66983949],
       [-1.32330447,  3.14365936,  0.63899562,  0.89868346],
       [-0.02493461,  0.3858107 , -1.57718486, -0.0214603 ]])
In [40]:
df.describe()
Out[40]:
  A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.637908 1.224284 -0.213363 -0.086316
std 1.021078 1.401987 1.282079 1.171045
min -2.258121 -0.655249 -2.114725 -2.030407
25% -1.157065 0.444982 -1.085732 -0.507745
50% -0.405153 1.007643 0.497085 0.026063
75% -0.056690 2.190345 0.630633 0.692409
max 0.589219 3.143659 0.778567 1.231538
In [41]:
df
Out[41]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [42]:
df.T
Out[42]:
  2017-01-12 00:00:00 2017-01-13 00:00:00 2017-01-14 00:00:00 2017-01-15 00:00:00 2017-01-16 00:00:00 2017-01-17 00:00:00
A -2.258121 -0.658348 0.589219 -0.151958 -1.323304 -0.024935
B 2.456196 0.622495 1.392792 -0.655249 3.143659 0.385811
C 0.778567 0.388625 0.605545 -2.114725 0.638996 -1.577185
D -2.030407 0.073587 1.231538 -0.669839 0.898683 -0.021460
In [43]:
df.sort_index(axis=1, ascending=False)  # Sorting by an axis (here: columns, in descending label order).
Out[43]:
  D C B A
2017-01-12 -2.030407 0.778567 2.456196 -2.258121
2017-01-13 0.073587 0.388625 0.622495 -0.658348
2017-01-14 1.231538 0.605545 1.392792 0.589219
2017-01-15 -0.669839 -2.114725 -0.655249 -0.151958
2017-01-16 0.898683 0.638996 3.143659 -1.323304
2017-01-17 -0.021460 -1.577185 0.385811 -0.024935
In [44]:
df.sort_values(by="B") #Sorting by values
Out[44]:
  A B C D
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [45]:
df
Out[45]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460

Selection

Getting

In [46]:
df["A"]# Selecting a single column, which yields a Series, equivalent to df.A
Out[46]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [47]:
df.A
Out[47]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [48]:
df[0:3]  #Selecting via [], which slices the rows.
Out[48]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [49]:
df["2017-01-13":"2017-01-17"]
Out[49]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460

Selection by Label

In [50]:
dates
Out[50]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [51]:
df.loc[dates[0]] #For getting a cross section using a label
Out[51]:
A   -2.258121
B    2.456196
C    0.778567
D   -2.030407
Name: 2017-01-12 00:00:00, dtype: float64
In [52]:
df.loc[:,["A","B"]]
Out[52]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
2017-01-17 -0.024935 0.385811
In [53]:
df.loc['20170112':'20170116',['A','B']] #Showing label slicing, both endpoints are included
Out[53]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [54]:
df.loc["20170115",["A","B"]] 
Out[54]:
A   -0.151958
B   -0.655249
Name: 2017-01-15 00:00:00, dtype: float64
In [55]:
df.loc[dates[3],"D"] #For getting a scalar value
Out[55]:
-0.6698394854437093
In [56]:
df.at[dates[3],"D"] #For getting fast access to a scalar (equiv to the prior method)
Out[56]:
-0.6698394854437093

Selection by Position

In [57]:
df.iloc[3] #Select via the position of the passed integers
Out[57]:
A   -0.151958
B   -0.655249
C   -2.114725
D   -0.669839
Name: 2017-01-15 00:00:00, dtype: float64
In [58]:
df.iloc[2:5,0:2] # By integer slices, acting similar to numpy/python
Out[58]:
  A B
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [59]:
df.iloc[[1,3,4],[0,2]] #By lists of integer position locations, similar to the numpy/python style
Out[59]:
  A C
2017-01-13 -0.658348 0.388625
2017-01-15 -0.151958 -2.114725
2017-01-16 -1.323304 0.638996
In [60]:
df.iloc[1:3,:]
Out[60]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [61]:
df.iloc[:,1:3]
Out[61]:
  B C
2017-01-12 2.456196 0.778567
2017-01-13 0.622495 0.388625
2017-01-14 1.392792 0.605545
2017-01-15 -0.655249 -2.114725
2017-01-16 3.143659 0.638996
2017-01-17 0.385811 -1.577185
In [62]:
df.iloc[1,1] #For getting a value explicitly
Out[62]:
0.62249451281708756
In [63]:
df.iat[1,1] #For getting fast access to a scalar (equiv to the prior method)
Out[63]:
0.62249451281708756

Boolean Indexing

In [64]:
df[df.A>0] #Using a single column’s values to select data
Out[64]:
  A B C D
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [65]:
df[df>0] #Selecting values from a DataFrame where a boolean condition is met
Out[65]:
  A B C D
2017-01-12 NaN 2.456196 0.778567 NaN
2017-01-13 NaN 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 NaN NaN NaN NaN
2017-01-16 NaN 3.143659 0.638996 0.898683
2017-01-17 NaN 0.385811 NaN NaN
In [66]:
df2
Out[66]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [67]:
df
Out[67]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [68]:
df2=df.copy()
df2
Out[68]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [69]:
df.equals(df2)
Out[69]:
True
In [70]:
df==df2
Out[70]:
  A B C D
2017-01-12 True True True True
2017-01-13 True True True True
2017-01-14 True True True True
2017-01-15 True True True True
2017-01-16 True True True True
2017-01-17 True True True True
In [71]:
df is df2
Out[71]:
False
In [72]:
df2["E"]=["one","one","two","three","four","three"]
df2
Out[72]:
  A B C D E
2017-01-12 -2.258121 2.456196 0.778567 -2.030407 one
2017-01-13 -0.658348 0.622495 0.388625 0.073587 one
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839 three
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460 three
In [73]:
df2[df2.E.isin(["two","four"])]
Out[73]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
In [74]:
df2[df2["E"].isin(["two","four"])]
Out[74]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four

Setting

In [75]:
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range("20171016", periods=6))  # Setting this as a new column (e.g. df["F"] = s1) would automatically align the data by index; the assignment itself is not performed in this notebook.
s1
Out[75]:
2017-10-16    1
2017-10-17    2
2017-10-18    3
2017-10-19    4
2017-10-20    5
2017-10-21    6
Freq: D, dtype: int64
In [76]:
df.at[dates[0],"A"]=0 #Setting values by label
In [77]:
df
Out[77]:
  A B C D
2017-01-12 0.000000 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [78]:
df.iat[0,1]=0
df
Out[78]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [79]:
df.loc[:,"D"]=np.array([5]*len(df)) #Setting by assigning with a numpy array
df
Out[79]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [80]:
df2=df.copy()
df2
Out[80]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [81]:
df2[df2>0]=-df2
df2
Out[81]:
  A B C D
2017-01-12 0.000000 0.000000 -0.778567 -5
2017-01-13 -0.658348 -0.622495 -0.388625 -5
2017-01-14 -0.589219 -1.392792 -0.605545 -5
2017-01-15 -0.151958 -0.655249 -2.114725 -5
2017-01-16 -1.323304 -3.143659 -0.638996 -5
2017-01-17 -0.024935 -0.385811 -1.577185 -5

Missing Data

In [83]:
df
Out[83]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [84]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
Out[84]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [85]:
df1.dropna(how="any") #To drop any rows that have missing data
Out[85]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
In [86]:
df1.fillna(value=5)  # Filling missing data
Out[86]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 5.0
2017-01-15 -0.151958 -0.655249 -2.114725 5 5.0
In [87]:
df1
Out[87]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [88]:
pd.isnull(df1)
Out[88]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [89]:
df1.isnull()
Out[89]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [90]:
df1.isna()  # this method does not exist in this pandas version~~
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-90-9dd6d031e095> in <module>()
----> 1 df1.isna()  # this method does not exist in this pandas version~~

D:\Users\asus\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   2968             if name in self._info_axis:
   2969                 return self[name]
-> 2970             return object.__getattribute__(self, name)
   2971 
   2972     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'isna'
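
The failure above is a version issue rather than a typo: DataFrame.isna() was only added in pandas 0.21, and this notebook ran on an older build. A version-agnostic sketch uses the long-standing isnull() alias instead:

import pandas as pd
print(pd.__version__)    # isna()/notna() exist from pandas 0.21 onward
missing = df1.isnull()   # isnull() works on older and newer versions alike
print(missing.sum())     # number of missing values per column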

Operations

Stats

Operations in general exclude missing data. Performing a descriptive statistic:

In [91]:
df
Out[91]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [92]:
df.mean()
Out[92]:
A   -0.261554
B    0.814918
C   -0.213363
D    5.000000
dtype: float64
In [93]:
df.mean(1)  #Same operation on the other axis
Out[93]:
2017-01-12    1.444642
2017-01-13    1.338193
2017-01-14    1.896889
2017-01-15    0.519517
2017-01-16    1.864838
2017-01-17    0.945923
Freq: D, dtype: float64
In [94]:
s= pd.Series([1,2,3,np.nan,4,5],index=dates).shift(2) 
# Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
s
Out[94]:
2017-01-12    NaN
2017-01-13    NaN
2017-01-14    1.0
2017-01-15    2.0
2017-01-16    3.0
2017-01-17    NaN
Freq: D, dtype: float64
In [95]:
df
Out[95]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [96]:
df.sub(s,axis="index")  #dataFrame与series的减法
Out[96]:
  A B C D
2017-01-12 NaN NaN NaN NaN
2017-01-13 NaN NaN NaN NaN
2017-01-14 -0.410781 0.392792 -0.394455 4.0
2017-01-15 -2.151958 -2.655249 -4.114725 3.0
2017-01-16 -4.323304 0.143659 -2.361004 2.0
2017-01-17 NaN NaN NaN NaN
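
The same alignment also works along the other axis. A minimal sketch (the Series s2 below is made up for illustration):

s2 = pd.Series([1, 2, 3, 4], index=["A", "B", "C", "D"])
df.sub(s2, axis="columns")  # align on column labels instead of the index (axis=1 is equivalent)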

Apply

In [97]:
df
Out[97]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [98]:
df.apply(np.cumsum)  # cumulative sum down each column.
Out[98]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 1.167192 10
2017-01-14 -0.069129 2.015286 1.772737 15
2017-01-15 -0.221087 1.360038 -0.341988 20
2017-01-16 -1.544392 4.503697 0.297008 25
2017-01-17 -1.569326 4.889508 -1.280177 30
In [99]:
df.apply(lambda x: x.max()-x.min())
Out[99]:
A    1.912523
B    3.798908
C    2.893293
D    0.000000
dtype: float64

Histogramming

In [100]:
s= pd.Series(np.random.randint(0,7,size=10))
s
Out[100]:
0    4
1    5
2    2
3    0
4    5
5    3
6    4
7    3
8    3
9    0
dtype: int32
In [101]:
s.value_counts()
Out[101]:
3    3
5    2
4    2
0    2
2    1
dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [102]:
s= pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
Out[102]:
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
In [103]:
s
Out[103]:
0       A
1       B
2       C
3    Aaba
4    Baca
5     NaN
6    CABA
7     dog
8     cat
dtype: object

Merge

Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. See the Merging section. Concatenating pandas objects together with concat():

In [104]:
df
Out[104]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [105]:
df=pd.DataFrame(np.random.randn(10,4))
df
Out[105]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [106]:
# break it into pieces
pieces=[df[:3],df[3:7],df[7:]]
pd.concat(pieces)
Out[106]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [107]:
pieces
Out[107]:
[          0         1         2         3
 0  0.111766 -0.505125  2.156029  0.419152
 1  1.068870  1.180587  0.361345  1.090554
 2  0.488997  0.281507 -0.738345 -0.242974,
           0         1         2         3
 3 -1.846709  1.686173 -0.202319 -1.151983
 4  0.573012 -1.979189  1.544768  1.594595
 5 -0.954571 -0.696788  0.270959 -2.296720
 6 -1.511946  1.796113  0.399493  0.412664,
           0         1         2         3
 7  0.089844 -0.545153 -0.315653 -0.235828
 8 -0.747140  1.222900 -1.650812  0.292432
 9  0.659855  0.501265  0.363978  1.722914]

Join

SQL style merges. See the Database style joining section.

In [108]:
left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
In [109]:
left
Out[109]:
  key lval
0 foo 1
1 foo 2
In [110]:
right
Out[110]:
  key rval
0 foo 4
1 foo 5
In [111]:
pd.merge(left,right,on="key")
Out[111]:
  key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5
In [112]:
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
In [113]:
left
Out[113]:
  key lval
0 foo 1
1 bar 2
In [114]:
right
Out[114]:
  key rval
0 foo 4
1 bar 5
In [115]:
pd.merge(left,right,on="key")
Out[115]:
  key lval rval
0 foo 1 4
1 bar 2 5
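
merge() performs an inner join by default; the how parameter selects the other SQL-style joins. A small sketch with a non-matching key (left2 and right2 are made up for illustration):

left2 = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right2 = pd.DataFrame({'key': ['foo', 'baz'], 'rval': [4, 5]})
pd.merge(left2, right2, on='key', how='outer')  # union of keys; the unmatched sides become NaN
# how='left' keeps every key from left2; how='inner' (the default) keeps only 'foo'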

Append

In [116]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
Out[116]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
In [117]:
s=df.iloc[3]
s
Out[117]:
A    0.655751
B    1.235660
C    1.729038
D    1.074948
Name: 3, dtype: float64
In [118]:
df.append(s,ignore_index=True)
Out[118]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
8 0.655751 1.235660 1.729038 1.074948
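
Note that append() copies the whole frame on every call, so appending row by row in a loop is quadratic. A sketch of the usual alternative (collect the rows first, concatenate once):

rows = [df.iloc[3], df.iloc[5]]                         # any iterable of Series/rows
pd.concat([df, pd.DataFrame(rows)], ignore_index=True)  # one copy instead of one per appended row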

Grouping

By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria
• Applying a function to each group independently
• Combining the results into a data structure

In [119]:
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
                         'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three',
                         'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})
df
Out[119]:
  A B C D
0 foo one -0.523738 -1.363519
1 bar one -0.071920 -2.618027
2 foo two -2.712421 -0.407372
3 bar three -0.635898 -1.942854
4 foo two 0.952073 -0.546110
5 bar two 1.474296 -0.982238
6 foo one -0.529788 -0.213397
7 foo three 0.877394 -0.791663
In [120]:
df.groupby("A").sum()
Out[120]:
  C D
A    
bar 0.766479 -5.543120
foo -1.936480 -3.322062
In [121]:
df.groupby(["A","B"]).sum()  #Grouping by multiple columns forms a hierarchical index, which we then apply the function.
Out[121]:
                  C         D
A   B
bar one   -0.071920 -2.618027
    three -0.635898 -1.942854
    two    1.474296 -0.982238
foo one   -1.053527 -1.576917
    three  0.877394 -0.791663
    two   -1.760347 -0.953482
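
groupby is not limited to sum(); several aggregations can be applied in one pass. A minimal sketch:

df.groupby("A")["C"].agg(["sum", "mean", "count"])  # one row per group, one column per statistic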

Reshaping

Stack

In [122]:
tuples = list(zip([['bar', 'bar', 'baz', 'baz',   # note: without the unpacking *, zip receives the two lists as single items (fixed in the next cell)
                    'foo', 'foo', 'qux', 'qux'],
                   ['one', 'two', 'one', 'two',
                    'one', 'two', 'one', 'two']]))
tuples
Out[122]:
[(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],),
 (['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],)]
In [123]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))
tuples
Out[123]:
[('bar', 'one'),
 ('bar', 'two'),
 ('baz', 'one'),
 ('baz', 'two'),
 ('foo', 'one'),
 ('foo', 'two'),
 ('qux', 'one'),
 ('qux', 'two')]
In [124]:
index=pd.MultiIndex.from_tuples(tuples,names=["first","second"])
index
Out[124]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
           labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
In [125]:
df=pd.DataFrame(np.random.randn(8,2),index=index,columns=['A', 'B'])
df
Out[125]:
                     A         B
first second
bar   one    -1.101051 -1.126231
      two    -0.395652 -0.313567
baz   one     1.378579 -1.637869
      two     0.665960 -0.259749
foo   one    -0.256181  1.260131
      two    -0.994720  0.506272
qux   one    -0.422649  0.191402
      two    -0.102085  0.975210
In [126]:
df2=df[:4]
df2
Out[126]:
                     A         B
first second
bar   one    -1.101051 -1.126231
      two    -0.395652 -0.313567
baz   one     1.378579 -1.637869
      two     0.665960 -0.259749
In [127]:
stacked= df2.stack()
stacked
Out[127]:
first  second   
bar    one     A   -1.101051
               B   -1.126231
       two     A   -0.395652
               B   -0.313567
baz    one     A    1.378579
               B   -1.637869
       two     A    0.665960
               B   -0.259749
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

In [128]:
stacked.unstack()
Out[128]:
                     A         B
first second
bar   one    -1.101051 -1.126231
      two    -0.395652 -0.313567
baz   one     1.378579 -1.637869
      two     0.665960 -0.259749
In [129]:
stacked.unstack(1)
Out[129]:
second        one       two
first
bar   A -1.101051 -0.395652
      B -1.126231 -0.313567
baz   A  1.378579  0.665960
      B -1.637869 -0.259749
In [130]:
stacked.unstack(0)
Out[130]:
first         bar       baz
second
one    A -1.101051  1.378579
       B -1.126231 -1.637869
two    A -0.395652  0.665960
       B -0.313567 -0.259749

Pivot Tables

In [131]:
df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12),
                   'E': np.random.randn(12)})
df
Out[131]:
  A B C D E
0 one A foo 0.039230 0.134261
1 one B foo 0.952890 -0.499183
2 two C foo -0.778814 -0.655735
3 three A bar 0.798864 0.025109
4 one B bar -0.580050 -1.711672
5 one C bar 0.004300 -0.433591
6 two A foo 0.229248 -2.648814
7 three B foo 0.506488 0.630373
8 one C foo -0.315667 0.031764
9 one A bar -1.547410 0.743825
10 two B bar -0.480958 0.365255
11 three C bar 1.742948 0.692884
In [4]:
pd.pivot_table(df,values="D",index=["A","B"],columns=["C"])
Out[4]:
C             bar       foo
A     B
one   A  0.932814 -1.440079
      B  0.060252  1.071877
      C  2.879779  0.355274
three A -0.328442       NaN
      B       NaN -2.544812
      C -1.879058       NaN
two   A       NaN -1.987377
      B  0.220517       NaN
      C       NaN -0.082820
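
With the default mean aggregation, pivot_table is equivalent to a groupby followed by unstack; a sketch of the same table built by hand:

df.groupby(["A", "B", "C"])["D"].mean().unstack("C")  # move the C level into the columns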

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.

In [132]:
rng=pd.date_range("1/2/2017",periods=100,freq="S")
rng
Out[132]:
DatetimeIndex(['2017-01-02 00:00:00', '2017-01-02 00:00:01',
               '2017-01-02 00:00:02', '2017-01-02 00:00:03',
               '2017-01-02 00:00:04', '2017-01-02 00:00:05',
               '2017-01-02 00:00:06', '2017-01-02 00:00:07',
               '2017-01-02 00:00:08', '2017-01-02 00:00:09',
               '2017-01-02 00:00:10', '2017-01-02 00:00:11',
               '2017-01-02 00:00:12', '2017-01-02 00:00:13',
               '2017-01-02 00:00:14', '2017-01-02 00:00:15',
               '2017-01-02 00:00:16', '2017-01-02 00:00:17',
               '2017-01-02 00:00:18', '2017-01-02 00:00:19',
               '2017-01-02 00:00:20', '2017-01-02 00:00:21',
               '2017-01-02 00:00:22', '2017-01-02 00:00:23',
               '2017-01-02 00:00:24', '2017-01-02 00:00:25',
               '2017-01-02 00:00:26', '2017-01-02 00:00:27',
               '2017-01-02 00:00:28', '2017-01-02 00:00:29',
               '2017-01-02 00:00:30', '2017-01-02 00:00:31',
               '2017-01-02 00:00:32', '2017-01-02 00:00:33',
               '2017-01-02 00:00:34', '2017-01-02 00:00:35',
               '2017-01-02 00:00:36', '2017-01-02 00:00:37',
               '2017-01-02 00:00:38', '2017-01-02 00:00:39',
               '2017-01-02 00:00:40', '2017-01-02 00:00:41',
               '2017-01-02 00:00:42', '2017-01-02 00:00:43',
               '2017-01-02 00:00:44', '2017-01-02 00:00:45',
               '2017-01-02 00:00:46', '2017-01-02 00:00:47',
               '2017-01-02 00:00:48', '2017-01-02 00:00:49',
               '2017-01-02 00:00:50', '2017-01-02 00:00:51',
               '2017-01-02 00:00:52', '2017-01-02 00:00:53',
               '2017-01-02 00:00:54', '2017-01-02 00:00:55',
               '2017-01-02 00:00:56', '2017-01-02 00:00:57',
               '2017-01-02 00:00:58', '2017-01-02 00:00:59',
               '2017-01-02 00:01:00', '2017-01-02 00:01:01',
               '2017-01-02 00:01:02', '2017-01-02 00:01:03',
               '2017-01-02 00:01:04', '2017-01-02 00:01:05',
               '2017-01-02 00:01:06', '2017-01-02 00:01:07',
               '2017-01-02 00:01:08', '2017-01-02 00:01:09',
               '2017-01-02 00:01:10', '2017-01-02 00:01:11',
               '2017-01-02 00:01:12', '2017-01-02 00:01:13',
               '2017-01-02 00:01:14', '2017-01-02 00:01:15',
               '2017-01-02 00:01:16', '2017-01-02 00:01:17',
               '2017-01-02 00:01:18', '2017-01-02 00:01:19',
               '2017-01-02 00:01:20', '2017-01-02 00:01:21',
               '2017-01-02 00:01:22', '2017-01-02 00:01:23',
               '2017-01-02 00:01:24', '2017-01-02 00:01:25',
               '2017-01-02 00:01:26', '2017-01-02 00:01:27',
               '2017-01-02 00:01:28', '2017-01-02 00:01:29',
               '2017-01-02 00:01:30', '2017-01-02 00:01:31',
               '2017-01-02 00:01:32', '2017-01-02 00:01:33',
               '2017-01-02 00:01:34', '2017-01-02 00:01:35',
               '2017-01-02 00:01:36', '2017-01-02 00:01:37',
               '2017-01-02 00:01:38', '2017-01-02 00:01:39'],
              dtype='datetime64[ns]', freq='S')
In [133]:
ts =pd.Series(np.random.randint(0,500,len(rng)),index=rng)
ts
Out[133]:
2017-01-02 00:00:00    251
2017-01-02 00:00:01     63
2017-01-02 00:00:02    108
2017-01-02 00:00:03    288
2017-01-02 00:00:04    491
2017-01-02 00:00:05    490
2017-01-02 00:00:06    343
2017-01-02 00:00:07    357
2017-01-02 00:00:08     72
2017-01-02 00:00:09    171
2017-01-02 00:00:10    324
2017-01-02 00:00:11    281
2017-01-02 00:00:12    176
2017-01-02 00:00:13     14
2017-01-02 00:00:14    495
2017-01-02 00:00:15    150
2017-01-02 00:00:16     69
2017-01-02 00:00:17    144
2017-01-02 00:00:18    126
2017-01-02 00:00:19    368
2017-01-02 00:00:20    129
2017-01-02 00:00:21    386
2017-01-02 00:00:22    228
2017-01-02 00:00:23    458
2017-01-02 00:00:24     98
2017-01-02 00:00:25    244
2017-01-02 00:00:26    206
2017-01-02 00:00:27     98
2017-01-02 00:00:28     92
2017-01-02 00:00:29    259
                      ... 
2017-01-02 00:01:10    127
2017-01-02 00:01:11    342
2017-01-02 00:01:12    185
2017-01-02 00:01:13    123
2017-01-02 00:01:14     73
2017-01-02 00:01:15    132
2017-01-02 00:01:16    462
2017-01-02 00:01:17    317
2017-01-02 00:01:18    180
2017-01-02 00:01:19    247
2017-01-02 00:01:20     97
2017-01-02 00:01:21    401
2017-01-02 00:01:22    342
2017-01-02 00:01:23    382
2017-01-02 00:01:24    304
2017-01-02 00:01:25     47
2017-01-02 00:01:26    193
2017-01-02 00:01:27    334
2017-01-02 00:01:28    196
2017-01-02 00:01:29    297
2017-01-02 00:01:30    195
2017-01-02 00:01:31    236
2017-01-02 00:01:32    200
2017-01-02 00:01:33    490
2017-01-02 00:01:34    196
2017-01-02 00:01:35    201
2017-01-02 00:01:36    397
2017-01-02 00:01:37    494
2017-01-02 00:01:38    482
2017-01-02 00:01:39    267
Freq: S, Length: 100, dtype: int32
In [7]:
ts.resample("5Min").sum()
Out[7]:
2017-01-02    22939
Freq: 5T, dtype: int32
In [9]:
ts.resample("1Min").sum()
Out[9]:
2017-01-02 00:00:00    13896
2017-01-02 00:01:00     9043
Freq: T, dtype: int32

Time zone representation: localizing naive timestamps to UTC.

In [10]:
rng= pd.date_range("2/1/2017 00:00",periods=5,freq="D")
rng
Out[10]:
DatetimeIndex(['2017-02-01', '2017-02-02', '2017-02-03', '2017-02-04',
               '2017-02-05'],
              dtype='datetime64[ns]', freq='D')
In [12]:
ts=pd.Series(np.random.randn(len(rng)),index=rng)
ts
Out[12]:
2017-02-01    0.329594
2017-02-02    2.097319
2017-02-03    1.852023
2017-02-04   -0.213452
2017-02-05    0.160873
Freq: D, dtype: float64
In [13]:
tsUtc=ts.tz_localize("UTC")
tsUtc
Out[13]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Convert to another time zone.

In [14]:
tsUtc.tz_convert("US/Eastern")
Out[14]:
2017-01-31 19:00:00-05:00    0.329594
2017-02-01 19:00:00-05:00    2.097319
2017-02-02 19:00:00-05:00    1.852023
2017-02-03 19:00:00-05:00   -0.213452
2017-02-04 19:00:00-05:00    0.160873
Freq: D, dtype: float64
In [15]:
tsUtc
Out[15]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Converting between time span representations

In [16]:
rng=pd.date_range("1/8/2017",periods=5,freq="M")
rng
Out[16]:
DatetimeIndex(['2017-01-31', '2017-02-28', '2017-03-31', '2017-04-30',
               '2017-05-31'],
              dtype='datetime64[ns]', freq='M')
In [18]:
ts=pd.Series(np.random.randn(len(rng)),rng)
ts
Out[18]:
2017-01-31    0.904523
2017-02-28   -0.470144
2017-03-31   -0.373244
2017-04-30    0.860448
2017-05-31    0.176226
Freq: M, dtype: float64
In [20]:
ps=ts.to_period()
ps
Out[20]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64
In [21]:
ps.to_timestamp()
Out[21]:
2017-01-01    0.904523
2017-02-01   -0.470144
2017-03-01   -0.373244
2017-04-01    0.860448
2017-05-01    0.176226
Freq: MS, dtype: float64
In [22]:
ps
Out[22]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

In [23]:
prng=pd.period_range("1990Q1","2017Q4",freq="Q-NOV")
prng
Out[23]:
PeriodIndex(['1990Q1', '1990Q2', '1990Q3', '1990Q4', '1991Q1', '1991Q2',
             '1991Q3', '1991Q4', '1992Q1', '1992Q2',
             ...
             '2015Q3', '2015Q4', '2016Q1', '2016Q2', '2016Q3', '2016Q4',
             '2017Q1', '2017Q2', '2017Q3', '2017Q4'],
            dtype='period[Q-NOV]', length=112, freq='Q-NOV')
In [25]:
ts= pd.Series(np.random.randn(len(prng)),prng)
ts.head()
Out[25]:
1990Q1    1.193031
1990Q2    0.621627
1990Q3   -0.235553
1990Q4    0.642938
1991Q1    0.247024
Freq: Q-NOV, dtype: float64
In [26]:
ts.index=(prng.asfreq("M","e")+1).asfreq("H","s")+9
ts.head()
Out[26]:
1990-03-01 09:00    1.193031
1990-06-01 09:00    0.621627
1990-09-01 09:00   -0.235553
1990-12-01 09:00    0.642938
1991-03-01 09:00    0.247024
Freq: H, dtype: float64
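
Unpacking that index conversion step by step (a sketch reusing the same prng; with Q-NOV, 1990Q1 runs from December 1989 through February 1990):

m_end = prng.asfreq("M", "e")     # last month of each quarter ('e' = end), e.g. 1990Q1 -> 1990-02
m_next = m_end + 1                # the month following the quarter end, e.g. 1990-03
h9 = m_next.asfreq("H", "s") + 9  # start ('s') of that month as hourly periods, plus 9 hours -> 09:00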

Categoricals

In [34]:
df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6], "raw_grade": ["a", "a", "c", "b", "b", "f"]})
df
Out[34]:
  id raw_grade
0 1 a
1 2 a
2 3 c
3 4 b
4 5 b
5 6 f

Convert the raw grades to a categorical data type.

In [35]:
df["grade"]=df.raw_grade.astype("category")
df
Out[35]:
  id raw_grade grade
0 1 a a
1 2 a a
2 3 c c
3 4 b b
4 5 b b
5 6 f f
In [36]:
df.grade  # the new column has a "category" dtype
Out[36]:
0    a
1    a
2    c
3    b
4    b
5    f
Name: grade, dtype: category
Categories (4, object): [a, b, c, f]
In [37]:
# Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)

df.grade.cat.categories=["very good","good","normal","bad"]
df
Out[37]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c normal
3 4 b good
4 5 b good
5 6 f bad
In [38]:
# Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

df.grade=df.grade.cat.set_categories(["very bad", "bad", "medium","good", "very good"])
df.grade
Out[38]:
0    very good
1    very good
2          NaN
3         good
4         good
5          bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [39]:
df
Out[39]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c NaN
3 4 b good
4 5 b good
5 6 f bad

Sorting is per order in the categories, not lexical order

In [40]:
df.sort_values(by="grade")
Out[40]:
  id raw_grade grade
2 3 c NaN
5 6 f bad
3 4 b good
4 5 b good
0 1 a very good
1 2 a very good

Grouping by a categorical column also shows empty categories

In [41]:
df.groupby("grade").size()
Out[41]:
grade
very bad     0
bad          1
medium       0
good         2
very good    2
dtype: int64

Plotting

In [43]:
ts=pd.Series(np.random.randn(1000),index=pd.date_range("1/1/2017",periods=1000))
ts.head()
Out[43]:
2017-01-01   -0.745067
2017-01-02   -0.070895
2017-01-03    0.233542
2017-01-04   -0.206597
2017-01-05    0.891064
Freq: D, dtype: float64
In [45]:
ts=ts.cumsum()
ts.head()
Out[45]:
2017-01-01   -0.745067
2017-01-02   -1.561029
2017-01-03   -2.143449
2017-01-04   -2.932466
2017-01-05   -2.830418
Freq: D, dtype: float64
In [48]:
ts.plot()
Out[48]:
<matplotlib.axes._subplots.AxesSubplot at 0x19bf6a6e278>
In [50]:
df=pd.DataFrame(np.random.randn(1000,4),index=ts.index,columns=["A","B","C","D"])
df.head()
Out[50]:
  A B C D
2017-01-01 -1.940139 -0.476590 -0.154066 1.692812
2017-01-02 0.399891 0.268976 0.596209 -0.484979
2017-01-03 0.814519 -0.142193 -0.084394 -0.687342
2017-01-04 0.385848 -1.230059 -0.093327 -0.096652
2017-01-05 0.407435 -0.849347 0.379192 0.172933
In [51]:
df=df.cumsum()
In [53]:
plt.figure()
df.plot()
plt.legend(loc="best")
plt.show()
<matplotlib.figure.Figure at 0x19bf8855da0>
<matplotlib.figure.Figure at 0x19bf897dc88>
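
df.plot() returns a matplotlib Axes, so the chart can also be written to disk instead of shown interactively. A minimal sketch (the file name is arbitrary):

ax = df.plot()                   # plot all four columns against the index
ax.legend(loc="best")
ax.figure.savefig("cumsum.png")  # save the enclosing figure to a file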

Getting Data In/Out

CSV

In [ ]:
df.to_csv("foo.csv")
In [ ]:
pd.read_csv("foo.csv")

HDF5

In [ ]:
df.to_hdf("foo.h5","df")
In [ ]:
pd.read_hdf("foo.h5","df")

Excel

In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')
In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/sirwill/article/details/78379813
