pyspark.pandas.DataFrame.var
DataFrame.var(axis: Union[int, str, None] = None, ddof: int = 1, numeric_only: bool = None) → Union[int, float, bool, str, bytes, decimal.Decimal, datetime.date, datetime.datetime, None, Series]

Return unbiased variance.
- Parameters
- axis : {index (0), columns (1)}
Axis for the function to be applied on.
- ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements (see the worked check at the end of the Examples).
- numeric_only : bool, default None
Include only float, int, and boolean columns. False is not supported. This parameter is mainly for pandas compatibility.
- Returns
- var : scalar for a Series, and a Series for a DataFrame.
Examples
>>> import numpy as np
>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({'a': [1, 2, 3, np.nan], 'b': [0.1, 0.2, 0.3, np.nan]},
...                   columns=['a', 'b'])
On a DataFrame:
>>> df.var()
a    1.00
b    0.01
dtype: float64

>>> df.var(axis=1)
0    0.405
1    1.620
2    3.645
3      NaN
dtype: float64

>>> df.var(ddof=0)
a    0.666667
b    0.006667
dtype: float64
On a Series:
>>> df['a'].var()
1.0

>>> df['a'].var(ddof=0)
0.6666666666666666
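As a worked check on the ddof divisor (an illustration added here, not part of the pyspark.pandas API), the column 'a' results above can be reproduced in plain Python. Missing values are skipped, so only 1, 2, and 3 contribute; the names vals, mean, and ss are local helpers for this sketch only.

>>> vals = [1.0, 2.0, 3.0]                    # non-missing values of column 'a'
>>> mean = sum(vals) / len(vals)
>>> ss = sum((v - mean) ** 2 for v in vals)   # sum of squared deviations
>>> ss / (len(vals) - 1)                      # divisor N - ddof with ddof=1 (unbiased)
1.0
>>> ss / len(vals)                            # divisor N with ddof=0
0.6666666666666666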