2. dataArray

dataArray contains a single dataset.

  • ndarray subclass containing matrix-like data
  • attributes are linked to the data, e.g. measurement or simulation parameters.
  • all numpy array functionality is preserved, e.g. slicing, index tricks.
  • fit routines from scipy.optimize (least squares, differential evolution, ...)
  • read/write in human-readable ASCII text including attributes, or pickle.

A dataArray can be created from an ASCII file or from ndarrays as data=js.dA('filename.dat'). See dataArray for details.

For Beginners:
  • The dataArray methods should not be used directly from this module.
  • Instead, create a dataArray and use the methods of this object.

Example: read data and plot (see Reading ASCII files for more about reading data)

import jscatter as js
# read data with 16 intermediate scattering functions from NSE measurement of protein diffusion
i5=js.dL(js.examples.datapath+'/iqt_1hho.dat')
p=js.grace()
# plot the first 3 dataArrays
p.plot(i5[:3])

Example: create/change/…

# create from array or read from file
import jscatter as js
import numpy as np
x=np.r_[0:10:0.5]                                        # a list of values
D,A,q=0.45,0.99,1.2
data=js.dA(np.vstack([x,np.exp(-q**2*D*x),np.random.rand(len(x))*0.05]))    # creates dataArray
data.D=D;data.A=A;data.q=q
data.Y=data.Y*data.A                                     # change Y values
data[2]*=2                                               # change 3rd column
data.reason='just as a test'                             # add comment
data.Temperature=273.15+20                               # add attribute
data.savetxt('justasexample.dat')                        # save data
data2=js.dA('justasexample.dat')                         # read data into dataArray
data2.Y=data2.Y/data2.A
# use a method (from fitting or housekeeping)
data2.interp(np.r_[1:2:0.01]) # for interpolation

The dataarray module can be run standalone in a new project.

2.1. Class

dataArray dataArray (ndarray subclass) with attributes for fitting, plotting, filtering.
  • dataArray creation by js.dA('filename.dat') or from numpy arrays.
  • Array columns can be accessed as automatically generated attributes like .X,.Y,.eY (see protectedNames) or by indexing as data[0] -> .X
  • Corresponding column indices are set by dataArray.setColumnIndex() (default X,Y,eY = 0,1,2).
  • Multidimensional fitting of 1D,2D,3D (.X,.Z,.W) data. .Y is used as function values at coordinates [.X,.Z,.W] in fitting.
  • Attributes can be set like: data.aName = 1.2345
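The column convention above can be illustrated with a bare numpy array (a sketch of the layout only; a real dataArray additionally carries the attribute machinery): columns X, Y, eY are stored along the first axis, so data[0] corresponds to .X, data[1] to .Y and data[2] to .eY with the default column indices 0, 1, 2.

```python
import numpy as np

# sketch of the dataArray column convention using a bare ndarray:
# row 0 -> .X, row 1 -> .Y, row 2 -> .eY (default indices 0, 1, 2)
x = np.r_[0:10:0.5]
data = np.vstack([x,                       # .X
                  np.exp(-0.45 * x),       # .Y
                  np.full(len(x), 0.05)])  # .eY

X, Y, eY = data[0], data[1], data[2]       # what data.X, data.Y, data.eY refer to
```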

2.2. Attributes

protectedNames Defined protected names which are not allowed as attribute names.
dataArray.showattr([maxlength, exclude]) Show data specific attributes with values as overview.
dataArray.attr Show data specific attribute names as sorted list of attribute names.
dataArray.getfromcomment(attrname) Extract a non-number parameter from comment with attrname in front.
dataArray.extract_comm([iname, deletechars, …]) Extracts non-obvious attributes from comment and adds them to attributes.
dataArray.resumeAttrTxt([names, maxlength]) Resume attributes in text form.
dataArray.setattr(objekt[, prepend, keyadd]) Set (copy) attributes from objekt.
dataArray.setColumnIndex(*args, **kwargs) Set the column index where to find X, Y, Z and errors eY, eX, eZ.
dataArray.name Attribute name, mainly the filename of read data files.
dataArray.array Strip of all attributes and return a simple ndarray.
dataArray.argmax([axis, out]) Return indices of the maximum values along the given axis.
dataArray.argmin([axis, out]) Return indices of the minimum values along the given axis of a.

2.3. Fitting

dataArray.fit(model[, freepar, fixpar, …]) Least square fit to model that minimizes \chi^2 (uses scipy.optimize).
dataArray.modelValues(*args, **kwargs) Calculates modelValues of model after a fit
dataArray.setLimit(*args, **kwargs) Set upper and lower limits for parameters in least square fit.
dataArray.hasLimit Return existing limits.
dataArray.setConstrain(*args) Set constrains for constrained minimization in fit.
dataArray.hasConstrain Return list with defined constrained source code.
dataArray.makeErrPlot(*args, **kwargs) Creates a GracePlot for intermediate output from fit with residuals.
dataArray.makeNewErrPlot(*args, **kwargs) Creates a NEW ErrPlot without destroying the last.
dataArray.killErrPlot(*args, **kwargs) Kills ErrPlot
dataArray.detachErrPlot(*args, **kwargs) Detaches ErrPlot without killing it and returns a reference to it.
dataArray.showlastErrPlot(*args, **kwargs) Shows last ErrPlot as created by makeErrPlot with last fit result.
dataArray.savelastErrPlot(*args, **kwargs) Saves errplot to file with filename.
dataArray.polyfit([X, deg, function, efunction]) Interpolated values for Y at values X using a polyfit.
dataArray.interpolate(X[, left, right, deg]) Piecewise interpolated values of Y at new positions X.
dataArray.interpAll([X, left, right]) Piecewise linear interpolated values of all columns at new X values.
dataArray.interp(X[, left, right]) Piecewise linear interpolated values of Y at position X returning only Y (faster).
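Under the hood, .fit minimizes \chi^2 with scipy.optimize. The idea can be sketched directly with scipy.optimize.least_squares on the exponential model used in the examples above (a simplified stand-in, not jscatter's fit interface):

```python
import numpy as np
from scipy.optimize import least_squares

# noise-free synthetic data for Y = A*exp(-q^2*D*X)
q, D_true, A_true = 1.2, 0.45, 0.99
X = np.r_[0:10:0.5]
Y = A_true * np.exp(-q**2 * D_true * X)

def residuals(p):
    D, A = p
    # least_squares minimizes sum(residuals**2), i.e. chi^2 for unit errors
    return A * np.exp(-q**2 * D * X) - Y

result = least_squares(residuals, x0=[0.1, 1.0])  # start values for D, A
D_fit, A_fit = result.x
```

dataArray.fit wraps this kind of minimization and additionally handles free/fixed parameters, limits and constraints as listed above.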

2.4. Housekeeping

dataArray.savetxt(name[, fmt]) Saves data in ASCII text file (optional gzipped).
dataArray.isort([col]) Sort along a column !!in place!!
dataArray.where(condition) Copy with lines where condition is fulfilled.
dataArray.prune([lower, upper, number, …]) Reduce number of values between upper and lower limits by selection or averaging in intervals.
dataArray.merge(others[, axis, isort]) Merges dataArrays to self !!NOT in place!!
dataArray.concatenate(others[, axis, isort]) Concatenates the dataArray[s] others to self !NOT IN PLACE!
dataArray.addZeroColumns([n]) Copy with n new zero columns at the end !!NOT in place!!
dataArray.addColumn([n, values]) Copy with new columns at the end populated by values !!NOT in place!!
dataArray.nakedCopy() Deepcopy without attributes, thus only the data.
dataArray.regrid([xgrid, zgrid, wgrid, …]) Regrid multidimensional data to a regular grid.
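prune reduces dense data by selecting or averaging within intervals. The averaging idea can be sketched in plain numpy (a simplified stand-in for dataArray.prune, equal-width intervals only):

```python
import numpy as np

x = np.linspace(0, 10, 1000)
y = np.sin(x)

# average x and y inside 10 equally spaced x-intervals
nbins = 10
edges = np.linspace(x.min(), x.max(), nbins + 1)
idx = np.digitize(x, edges[1:-1])          # interval index 0..nbins-1 per point
xm = np.array([x[idx == i].mean() for i in range(nbins)])
ym = np.array([y[idx == i].mean() for i in range(nbins)])
```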

2.5. Convenience

zeros(*args, **kwargs) dataArray filled with zeros.
ones(*args, **kwargs) dataArray filled with ones.
fromFunction(function, X, *args, **kwargs) Evaluates Y=function(X) for all X and returns a dataArray with X,Y.
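What fromFunction produces can be sketched with a hypothetical from_function helper returning a bare ndarray (the real fromFunction returns a dataArray):

```python
import numpy as np

def from_function(function, X):
    """Evaluate Y = function(x) for each x and stack rows [X, Y] (sketch only)."""
    Y = np.array([function(x) for x in X])
    return np.vstack([X, Y])

da = from_function(lambda x: x**2, np.r_[0:5])
# da[0] holds X, da[1] holds Y, matching the dataArray column convention
```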

jscatter.dataarray.protectedNames = ['X', 'Y', 'eY', 'eX', 'Z', 'eZ', 'W', 'eW']

Defined protected names which are not allowed as attribute names.

class jscatter.dataarray.dataArray[source]

Bases: jscatter.dataarray.dataArrayBase

dataArray (ndarray subclass) with attributes for fitting, plotting, filter.

  • A subclass of numpy ndarrays with attributes to add parameters describing the data.
  • Allows fitting, plotting, filtering, prune and more.
  • .X, .Y, .eY link to specified columns.
  • Numpy array functionality is preserved.
  • dataArray creation parameters (below) mainly determine how a file is read.
  • .Y is used as function values at coordinates [.X,.Z,.W] in fitting.
Parameters:
input : string, ndarray

Object to create a dataArray from.
  • Filenames with extension '.gz' are decompressed (gzip).
  • Filenames with an asterisk like exda=dataList(objekt='aa12*') as input for multiple files.
  • An in-memory stream for text I/O (Python3 -> io.StringIO, Python2.7 -> StringIO).

dtype : data type

dtype of final dataArray, see numpy.ndarray

index : int, default 0

Index of the dataset in the given input to select one from multiple.

block : string,slice (or slice indices), default None
Indicates separation of dataArray in file if multiple are present.
  • None : Auto detection of blocks according to change between datalines and non-datalines.
    A new dataArray is created if data and attributes are present.
  • string : If block is found at the beginning of a line a new dataArray is created and appended. block can be something like 'next' or the first parameter name of a new block as block='Temp'.
  • slice or slice indices : block=slice(2,100,3) slices the file lines as lines[i:j:k]. If only indices are given these are converted to a slice.

XYeYeX : list of integers, default=[0,1,2,None,None,None]

Columns for X, Y, eY, eX, Z, eZ. Change later with e.g. setColumnIndex(3,5,32). Values in dataArray can be changed by dataArray.X=[list of length X].

usecols : list of integer

Use only given columns and ignore others (after skiplines).

ignore : string, default ‘#’

Ignore lines starting with string e.g. ‘#’. For more complex lines to ignore use skiplines.

replace : dictionary of [string,regular expression object]:string

String replacement in read lines as {'old':'new',…} (after takeline). String pairs in this dictionary are replaced in each line. This is done prior to determining the line type and can be used to convert strings to numbers, e.g. ',':'.'. If a dict key is a regular expression object (e.g. rH=re.compile('H\d+')), it is replaced by string. See python module re for syntax.

skiplines : boolean function, list of string or single string

Skip if line meets condition (only data lines). Function gets the list of words in a data line. Examples:

  • lambda words: any(w in words for w in ['', 'NAN', '*']) # with exact match
  • lambda words: any(float(w)>3.1411 for w in words)
  • lambda words: len(words)==1

If a list is given, the lambda function is generated automatically as in the first example above. If a single string is given, it is tested whether the string is a substring of a word ('abc' in '123abc456').
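When a list of strings like ['', 'NAN', '*'] is given, the generated lambda is the exact-match test; a plain-Python sketch of that generated function:

```python
# the function generated from skiplines=['', 'NAN', '*'] (exact word match)
words_to_skip = ['', 'NAN', '*']
skip = lambda words: any(w in words for w in words_to_skip)

print(skip('1.0 NAN 3.0'.split()))   # True  -> line is skipped
print(skip('1.0 2.0 3.0'.split()))   # False -> line is kept
```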

delimiter : string

Separator between data fields in a line, default is any whitespace. E.g. '\t' for tabulator.

takeline : string,list of string, function

Filter lines to be included (all lines), e.g. to select lines starting with 'ATOM'. Should be combined with: replace (replace the starting word by a number, {'ATOM':1}, to be detected as data) and usecols to select the needed columns. Examples (function gets words in line):

  • lambda words: any(w in words for w in ['ATOM','CA']) # one of both words somewhere in line
  • lambda w: (w[0]=='ATOM') & (w[2]=='CA') # starts with 'ATOM' and third word is 'CA'

For a word or list of words the first example is generated automatically.

lines2parameter : list of integer

List of line numbers i to prepend with 'line_i' so they can be found as parameter line_i. Used to mark lines with parameters without a name (only numbers in a line as in .pdh files in the header). E.g. to skip the first lines.

XYeYeX : list of integers, default=[0,1,2,None,None,None]

Sets columns for X, Y, eY, eX, Z, eZ. This is ignored for dataList and dataArray objects as these have defined columns. Change later by data.setColumnIndex.

encoding : None, 'utf-8', 'cp1252', 'ascii'

The encoding of the files read. By default the system default encoding is used. Others: python2.7='ascii', python3='utf-8', Windows_english='cp1252', Windows_german='cp1251'.

Returns:
dataArray

Notes

  • Attributes to avoid (they are in the namespace of numpy ndarrays): T, mean, max, min, … These names are substituted by an appended '_' (underscore) if found in read data. Get a complete list by dir(np.array(0)).

  • Avoid attribute names including special math characters as ' ** + - / & '. Any char that can be interpreted as an operator (e.g. datalist.up-down) will be interpreted by python as: updown = datalist.up minus down and results in an AttributeError. To get the values use getattr(dataList,'up-down') or avoid usage of these characters.

  • If an attribute 'columnname' exists with a string containing column names separated by semicolons, the corresponding columns can be accessed in 2 ways (columnname='wavevector; Iqt'):

    • attribute with prepended underscore '_'+'name' => data._Iqt
    • columnname string used as index => data['Iqt']

    From the names all chars like "+-*/()[]()|§$%&#><°^, " are deleted.

    The columnname string is saved with the data and is restored when rereading the data.

    This is intended for reading and not writing.

Data access/change

exa=js.dA('afile.dat')
exa.columnname='t; iqt; e+iqt'  # if not given in read file
exa.eY=exa.Y*0.05               # default for X, Y is column 0,1; see XYeYeX or .setColumnIndex ; read+write
exa[-1]=exa[1]**4               # direct indexing of columns; read+write
exa[-1,::2]=exa[1,::2]*4        # direct indexing of columns; read+write; each second is used (see numpy)
eq1=exa[2]*exa[0]*4             # read+write
iq2=exa._iqt*4                  # access by underscore name; only read
eq3=exa._eiqt*exa._t*4          # read
iq4=exa['iqt']*4                # access like dictionary; only read
eq5=exa['eiqt']*exa['t']*4      # read
aa=np.r_[[np.r_[1:100],np.r_[1:100]**2]] #load from numpy array
daa=js.dA(aa)                            # with shape
daa.Y=daa.Y*2                            # change Y values; same as daa[1]
dbb=js.zeros((4,12))                     # empty dataArray
dbb.X=np.r_[1:13]                        # set X
dbb.Y=np.r_[1:13]**0.5                   # set Y
dbb[2]=dbb.X*5
dbb[3]=0.5                               # set 4th column
dbb.a=0.2345
dbb.setColumnIndex(ix=2,iy=1,iey=None)   # change column index for X,Y and no eY

Selecting

ndbb=dbb[:,dbb.X>20]            # only X>20
ndbb=dbb[:,dbb.X>dbb.Y/dbb.a]   # only X>Y/a

Read/write

import jscatter as js
# load data into a dataArray from an ASCII file; here load the third data block of the file.
daa=js.dA('./exampleData/iqt_1hho.dat',index=2)
dbb=js.ones((4,12))
dbb.ones=11111
dbb.save('folder/ones.dat')
dbb.save('folder/ones.dat.gz')  # gzipped file

Rules for reading of ASCII files

How files are interpreted :

Reads simple formats as tables with rows and columns like numpy.loadtxt.
The difference is how to treat additional information like attributes or comments and non-float data.

Line format rules: A dataset consists of comments, attributes and data (and optionally other datasets).

First two words in a line decide what it is:
  • string + value -> attribute with attribute name and list of values
  • string + string -> comment; ignore or convert to attribute by getfromcomment
  • value + value -> data line of an array; a sequence of such lines without break is input for the ndarray
  • single words -> are appended to comment
  • string + @unique_name -> link to another dataArray with this unique_name
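These first-two-words rules can be sketched as a small classifier (a simplified illustration, not jscatter's actual parser):

```python
def classify(line):
    """Classify an ASCII line by its first two words (simplified sketch)."""
    def isnumber(word):
        try:
            float(word)
            return True
        except ValueError:
            return False

    words = line.split()
    if len(words) < 2:
        return 'comment'                 # single words are appended to comment
    if isnumber(words[0]) and isnumber(words[1]):
        return 'data'                    # value + value -> data line
    if words[1].startswith('@'):
        return 'link'                    # string + @unique_name -> link
    if isnumber(words[1]):
        return 'attribute'               # string + value -> attribute
    return 'comment'                     # string + string -> comment
```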

Even complex ASCII files can be read with a few changes given as options.

Datasets are given as blocks of attributes and data.

A new dataArray is created if:

  • a data block with a parameter block (preceding or appended) is found.
  • a keyword as first word in a line is found:
    - Keyword can be e.g. the name of the first parameter.
    - Blocks are separated at the start or end of a number data block (like a matrix).
    - It is checked if parameters are prepended or appended to the data block.
    - If both are used, set block to the first keyword in the first line of a new block (the name of the first parameter).

Example of an ASCII file with attributes temp, pressure, name:

this is just a comment or description of the data
temp     293
pressure 1013 14
name     temp1bsa
0.854979E-01  0.178301E+03  0.383044E+02
0.882382E-01  0.156139E+03  0.135279E+02
0.909785E-01  0.150313E+03  0.110681E+02
0.937188E-01  0.147430E+03  0.954762E+01
0.964591E-01  0.141615E+03  0.846613E+01
0.991995E-01  0.141024E+03  0.750891E+01
0.101940E+00  0.135792E+03  0.685011E+01
0.104680E+00  0.140996E+03  0.607993E+01

this is just a second comment
temp     393
pressure 1011 12
name     temp2bsa
0.236215E+00  0.107017E+03  0.741353E+00
0.238955E+00  0.104532E+03  0.749095E+00
0.241696E+00  0.104861E+03  0.730935E+00
0.244436E+00  0.104052E+03  0.725260E+00
0.247176E+00  0.103076E+03  0.728606E+00
0.249916E+00  0.101828E+03  0.694907E+00
0.252657E+00  0.102275E+03  0.712851E+00
0.255397E+00  0.102052E+03  0.702520E+00
0.258137E+00  0.100898E+03  0.690019E+00

optional:

  • string + @name: Link to other data in the same file with the name given as 'name'. The content of @name is used as identifier. Think of an attribute with 2-dim data.

Reading of complex files with filtering of specific information

To read something like a pdb structure file with lines like

...
ATOM      1  N   LYS A   1       3.246  10.041  10.379  1.00  5.28           N
ATOM      2  CA  LYS A   1       2.386  10.407   9.247  1.00  7.90           C
ATOM      3  C   LYS A   1       2.462  11.927   9.098  1.00  7.93           C
ATOM      4  O   LYS A   1       2.582  12.668  10.097  1.00  6.28           O
ATOM      5  CB  LYS A   1       0.946   9.964   9.482  1.00  3.54           C
ATOM      6  CG  LYS A   1      -0.045  10.455   8.444  1.00  3.75           C
ATOM      7  CD  LYS A   1      -1.470  10.062   8.818  1.00  2.85           C
ATOM      8  CE  LYS A   1      -2.354   9.922   7.589  1.00  3.83           C
ATOM      9  NZ  LYS A   1      -3.681   9.377   7.952  1.00  1.78           N
...

combine takeline, replace and usecols.

usecols=[6,7,8] selects the columns as x,y,z positions

# select all atoms
xyz = js.dA('3rn3.pdb',takeline=lambda w:w[0]=='ATOM',replace={'ATOM':1},usecols=[6,7,8])
# select only CA atoms
xyz = js.dA('3rn3.pdb',takeline=lambda w:(w[0]=='ATOM') & (w[2]=='CA'),replace={'ATOM':1},usecols=[6,7,8])
# in PDB files different atomic structures are separated by "MODEL","ENDMODEL" lines.
# We might load all by using block
xyz = js.dA('3rn3.pdb',takeline=lambda w:(w[0]=='ATOM') & (w[2]=='CA'),
                       replace={'ATOM':1},usecols=[6,7,8],block='MODEL')
T

Same as self.transpose(), except that self is returned if self.ndim < 2.

Examples

>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
       [ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
       [ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])
addColumn(n=1, values=0)

Copy with new columns at the end populated by values !!NOT in place!!

Parameters:
n : int

Number of columns to append

values : float, list of float

Values to append in columns as data[-n:]=values

addZeroColumns(n=1)

Copy with n new zero columns at the end !!NOT in place!!

Parameters:
n : int

Number of columns to append

all(axis=None, out=None, keepdims=False)

Returns True if all elements evaluate to True.

Refer to numpy.all for full documentation.

See also

numpy.all
equivalent function
any(axis=None, out=None, keepdims=False)

Returns True if any of the elements of a evaluate to True.

Refer to numpy.any for full documentation.

See also

numpy.any
equivalent function
argmax(axis=None, out=None)

Return indices of the maximum values along the given axis.

Refer to numpy.argmax for full documentation.

See also

numpy.argmax
equivalent function
argmin(axis=None, out=None)

Return indices of the minimum values along the given axis of a.

Refer to numpy.argmin for detailed documentation.

See also

numpy.argmin
equivalent function
argpartition(kth, axis=-1, kind='introselect', order=None)

Returns the indices that would partition this array.

Refer to numpy.argpartition for full documentation.

New in version 1.8.0.

See also

numpy.argpartition
equivalent function
argsort(axis=-1, kind='quicksort', order=None)

Returns the indices that would sort this array.

Refer to numpy.argsort for full documentation.

See also

numpy.argsort
equivalent function
array

Strip of all attributes and return a simple ndarray.

astype(dtype, order='K', casting='unsafe', subok=True, copy=True)

Copy of the array, cast to a specified type.

Parameters:
dtype : str or dtype

Typecode or data-type to which the array is cast.

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

casting : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional

Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

  • ‘no’ means the data types should not be cast at all.
  • ‘equiv’ means only byte-order changes are allowed.
  • ‘safe’ means only casts which can preserve values are allowed.
  • ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
  • ‘unsafe’ means any data conversions may be done.
subok : bool, optional

If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.

copy : bool, optional

By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead of a copy.

Returns:
arr_t : ndarray

Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype, order.

Raises:
ComplexWarning

When casting from complex to float or int. To avoid this, one should use a.real.astype(t).

Notes

Starting in NumPy 1.9, astype method now returns an error if the string dtype to cast to is not long enough in ‘safe’ casting mode to hold the max value of integer/float array that is being casted. Previously the casting was allowed even if the result was truncated.

Examples

>>> x = np.array([1, 2, 2.5])
>>> x
array([ 1. ,  2. ,  2.5])
>>> x.astype(int)
array([1, 2, 2])
attr

Show data specific attribute names as sorted list of attribute names.

base

Base object if memory is from some other object.

Examples

The base of an array that owns its memory is None:

>>> x = np.array([1,2,3,4])
>>> x.base is None
True

Slicing creates a view, whose memory is shared with x:

>>> y = x[2:]
>>> y.base is x
True
byteswap(inplace)

Swap the bytes of the array elements

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place.

Parameters:
inplace : bool, optional

If True, swap bytes in-place, default is False.

Returns:
out : ndarray

The byteswapped array. If inplace is True, this is a view to self.

Examples

>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> map(hex, A)
['0x1', '0x100', '0x2233']
>>> A.byteswap(True)
array([  256,     1, 13090], dtype=int16)
>>> map(hex, A)
['0x100', '0x1', '0x3322']

Arrays of strings are not swapped

>>> A = np.array(['ceg', 'fac'])
>>> A.byteswap()
array(['ceg', 'fac'],
      dtype='|S3')
choose(choices, out=None, mode='raise')

Use an index array to construct a new array from a set of choices.

Refer to numpy.choose for full documentation.

See also

numpy.choose
equivalent function
clip(min=None, max=None, out=None)

Return an array whose values are limited to [min, max]. One of max or min must be given.

Refer to numpy.clip for full documentation.

See also

numpy.clip
equivalent function
compress(condition, axis=None, out=None)

Return selected slices of this array along given axis.

Refer to numpy.compress for full documentation.

See also

numpy.compress
equivalent function
concatenate(others, axis=1, isort=None)

Concatenates the dataArray[s] others to self !NOT IN PLACE!

and add all attributes from others.

Parameters:
others : dataArray, dataList, list of dataArray

Objects to concatenate with same shape as self.

axis : integer

Axis along to concatenate see numpy.concatenate

isort : integer

Sort array along column isort =i

Returns:
dataArray with merged attributes and isorted

Notes

See numpy.concatenate

conj()

Complex-conjugate all elements.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
conjugate()

Return the complex conjugate, element-wise.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
copy(order='C')

Return a copy of the array.

Parameters:
order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and :func:numpy.copy are very similar, but have different default values for their order= arguments.)

See also

numpy.copy, numpy.copyto

Examples

>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
ctypes

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters:
None
Returns:
c : Python object

Possessing attributes data, shape, strides, etc.

See also

numpy.ctypeslib

Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

  • data: A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self.__array_interface__['data'][0].
  • shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype(‘p’) on this platform. This base-type could be c_int, c_long, or c_longlong depending on the platform. The c_intp type is defined accordingly in numpy.ctypeslib. The ctypes array contains the shape of the underlying array.
  • strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array.
  • data_as(obj): Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)).
  • shape_as(obj): Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short).
  • strides_as(obj): Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong).

Be careful using the ctypes attribute - especially on temporary arrays or arrays constructed on the fly. For example, calling (a+b).ctypes.data_as(ctypes.c_void_p) returns a pointer to memory that is invalid because the array created as (a+b) is deallocated before the next Python statement. You can avoid this problem using either c=a+b or ct=(a+b).ctypes. In the latter case, ct will hold a reference to the array until ct is deleted or re-assigned.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the _as_parameter_ attribute which will return an integer equal to the data attribute.

Examples

>>> import ctypes
>>> x
array([[0, 1],
       [2, 3]])
>>> x.ctypes.data
30439712
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
<ctypes.LP_c_long object at 0x01F01300>
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
c_long(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
c_longlong(4294967296L)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
>>> x.ctypes.shape_as(ctypes.c_long)
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides_as(ctypes.c_longlong)
<numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>
cumprod(axis=None, dtype=None, out=None)

Return the cumulative product of the elements along the given axis.

Refer to numpy.cumprod for full documentation.

See also

numpy.cumprod
equivalent function
cumsum(axis=None, dtype=None, out=None)

Return the cumulative sum of the elements along the given axis.

Refer to numpy.cumsum for full documentation.

See also

numpy.cumsum
equivalent function
data

Python buffer object pointing to the start of the array’s data.

detachErrPlot(*args, **kwargs)[source]

Detaches ErrPlot without killing it and returns a reference to it.

diagonal(offset=0, axis1=0, axis2=1)

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to numpy.diagonal() for full documentation.

See also

numpy.diagonal
equivalent function
dot(b, out=None)

Dot product of two arrays.

Refer to numpy.dot for full documentation.

See also

numpy.dot
equivalent function

Examples

>>> a = np.eye(2)
>>> b = np.ones((2, 2)) * 2
>>> a.dot(b)
array([[ 2.,  2.],
       [ 2.,  2.]])

This array method can be conveniently chained:

>>> a.dot(b).dot(b)
array([[ 8.,  8.],
       [ 8.,  8.]])
dtype

Data-type of the array’s elements.

Parameters:
None
Returns:
d : numpy dtype object

See also

numpy.dtype

Examples

>>> x
array([[0, 1],
       [2, 3]])
>>> x.dtype
dtype('int32')
>>> type(x.dtype)
<type 'numpy.dtype'>
dump(file)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters:
file : str

A string naming the dump file.

dumps()

Returns the pickle of the array as a string. pickle.loads or numpy.loads will convert the string back to an array.

Parameters:
None
errPlot(*args, **kwargs)[source]

Plot into an existing ErrPlot. See Graceplot.plot for details.

errPlotTitle(*args, **kwargs)[source]
errPlottitle(*args, **kwargs)
errplot

Errplot handle

extract_comm(iname=0, deletechars='', replace={})

Extracts non-obvious attributes from comment and adds them to attributes.

The iname-th word is selected as attribute name and all numbers in the line are taken as values.

Parameters:
deletechars : string

Chars to delete

replace : dictionary of strings

Strings to replace {‘,’:’.’,’as’:’xx’,’r’:‘3.14’,…}

iname : integer

Which word to use as attribute name; in the example 3 for 'wavelength'

Notes

Example: for a comment line 'w [nm] 632 +- 2,5 wavelength',
extract_comm(iname=3, replace={',':'.'}) results in .wavelength=[632, 2.5]

fill(value)

Fill the array with a scalar value.

Parameters:
value : scalar

All elements of a will be assigned this value.

Examples

>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([ 1.,  1.])
fit(model, freepar={}, fixpar={}, mapNames={}, xslice=slice(None, None, None), condition=None, output=True, **kw)

Least square fit to model that minimizes \chi^2 (uses scipy.optimize).

See dataList.fit(), but only first parameter is used if more than one given.

flags

Information about the memory layout of the array.

Notes

The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access.

Only the UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags.

The array flags cannot be set arbitrarily:

  • UPDATEIFCOPY can only be set False.
  • ALIGNED can only be set True if the data is truly aligned.
  • WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true.

Attributes:
C_CONTIGUOUS (C)

The data is in a single, C-style contiguous segment.

F_CONTIGUOUS (F)

The data is in a single, Fortran-style contiguous segment.

OWNDATA (O)

The array owns the memory it uses or borrows it from another object.

WRITEABLE (W)

The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception.

ALIGNED (A)

The data and all elements are aligned appropriately for the hardware.

UPDATEIFCOPY (U)

This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array.

FNC

F_CONTIGUOUS and not C_CONTIGUOUS.

FORC

F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).

BEHAVED (B)

ALIGNED and WRITEABLE.

CARRAY (CA)

BEHAVED and C_CONTIGUOUS.

FARRAY (FA)

BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.

flat

A 1-D iterator over the array.

This is a numpy.flatiter instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object.

See also

flatten
Return a copy of the array collapsed into one dimension.

flatiter

Examples

>>> x = np.arange(1, 7).reshape(2, 3)
>>> x
array([[1, 2, 3],
       [4, 5, 6]])
>>> x.flat[3]
4
>>> x.T
array([[1, 4],
       [2, 5],
       [3, 6]])
>>> x.T.flat[3]
5
>>> type(x.flat)
<class 'numpy.flatiter'>

An assignment example:

>>> x.flat = 3; x
array([[3, 3, 3],
       [3, 3, 3]])
>>> x.flat[[1,4]] = 1; x
array([[3, 1, 3],
       [3, 1, 3]])
flatten(order='C')

Return a copy of the array collapsed into one dimension.

Parameters:
order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if a is Fortran contiguous in memory, row-major order otherwise. ‘K’ means to flatten a in the order the elements occur in memory. The default is ‘C’.

Returns:
y : ndarray

A copy of the input array, flattened to one dimension.

See also

ravel
Return a flattened array.
flat
A 1-D flat iterator over the array.

Examples

>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
getfield(dtype, offset=0)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters:
dtype : str or dtype

The data type of the view. The dtype size of the view can not be larger than that of the array itself.

offset : int

Number of bytes to skip before beginning the element view.

Examples

>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[ 1.+1.j,  0.+0.j],
       [ 0.+0.j,  2.+4.j]])
>>> x.getfield(np.float64)
array([[ 1.,  0.],
       [ 0.,  2.]])

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

>>> x.getfield(np.float64, offset=8)
array([[ 1.,  0.],
       [ 0.,  4.]])
getfromcomment(attrname)

Extract a non-number parameter from a comment line that starts with attrname.

If multiple comment lines start with attrname, the first one is used. The used comment line is deleted from comments.

Parameters:
attrname : string

Name of the parameter at the first position of the comment line.

hasConstrain

Return a list with the source code of the defined constraints.

hasLimit

Return existing limits.

See dataList.has_limit()

has_limit

Return existing limits.

See dataList.has_limit()

imag

The imaginary part of the array.

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.imag
array([ 0.        ,  0.70710678])
>>> x.imag.dtype
dtype('float64')
interp(X, left=None, right=None)

Piecewise linear interpolated values of Y at position X returning only Y (faster).

Parameters:
X : array,float

Values to interpolate

left : float

Value to return for X < X[0], default is Y[0].

right : float

Value to return for X > X[-1], default is Y[-1].

Returns:
array

Notes

See numpy.interp. Sorts automatically along X.
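The behaviour of the left/right defaults follows numpy.interp and can be checked directly (a minimal numpy sketch):

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])
Y = X**2
Xnew = np.array([-1.0, 0.5, 2.5, 4.0])

# outside the X range the edge values Y[0] and Y[-1] are returned by default
Ynew = np.interp(Xnew, X, Y)

# explicit left/right override the edge values
Yclip = np.interp(Xnew, X, Y, left=0.0, right=-1.0)
```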

interpAll(X=None, left=None, right=None)

Piecewise linear interpolated values of all columns at new X values.

Parameters:
X : array like

Values where to interpolate

left : float

Value to return for X < X[0], default is Y[0].

right : float

Value to return for X > X[-1], default is Y[-1].

Returns:
dataArray, here with X,Y,Z preserved and all attributes

Notes

See numpy.interp. Sorts automatically along X.

interpolate(X, left=None, right=None, deg=1)

Piecewise interpolated values of Y at new positions X.

Parameters:
X : array,float

Values to interpolate

left : float

Value to return for X < X[0], default is Y[0].

right : float

Value to return for X > X[-1], default is Y[-1].

deg : integer, default =1

Polynomial degree for interpolation along the attribute. For deg=1, values outside the data range are substituted by the nearest value (see np.interp). For deg>1 a spline interpolation from scipy.interpolate.interp1d is used; values outside the data range result in NaN.

Returns:
dataArray

Notes

See numpy.interp. Sorts automatically along X

isort(col='X')

Sort along a column !!in place!!

Parameters:
col : ‘X’,’Y’,’Z’,’eX’,’eY’,’eZ’ or 0,1,2,…

Column to sort along
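The in-place column sort corresponds to reordering all rows by one row's argsort; a plain numpy sketch of the idea (not jscatter's implementation):

```python
import numpy as np

# a dataArray-like 3xN block: rows are the columns X, Y, eY
data = np.array([[3.0, 1.0, 2.0],    # X
                 [9.0, 1.0, 4.0],    # Y
                 [0.3, 0.1, 0.2]])   # eY

# sort all rows by the X row, analogous to isort(col='X')
order = np.argsort(data[0])
data[:] = data[:, order]             # in-place reordering
```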

item(*args)

Copy an element of an array to a standard Python scalar and return it.

Parameters:
*args : Arguments (variable number and type)
  • none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned.
  • int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
  • tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
Returns:
z : Standard Python scalar object

A copy of the specified element of the array as a suitable Python scalar

Notes

When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.item(3)
2
>>> x.item(7)
5
>>> x.item((0, 1))
1
>>> x.item((2, 2))
3
itemset(*args)

Insert scalar into an array (scalar is cast to array’s dtype, if possible)

There must be at least 1 argument, and define the last argument as item. Then, a.itemset(*args) is equivalent to but faster than a[args] = item. The item should be a scalar value and args must select a single item in the array a.

Parameters:
*args : Arguments

If one argument: a scalar, only used in case a is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.

Notes

Compared to indexing syntax, itemset provides some speed increase for placing a scalar into a particular location in an ndarray, if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using itemset (and item) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[3, 1, 7],
       [2, 0, 3],
       [8, 5, 9]])
itemsize

Length of one array element in bytes.

Examples

>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16
killErrPlot(*args, **kwargs)[source]

Kills ErrPlot

If filename given the plot is saved.

makeErrPlot(*args, **kwargs)[source]

Creates a GracePlot for intermediate output from fit with residuals.

ErrPlot is updated only if consecutive steps need more than 2 seconds. The plot can be accessed later as .errplot .

Parameters:
title : string

Title of plot.

residuals : string
Plot type of residuals.
  • ‘absolut’ or ‘a’ absolute residuals
  • ‘relative’ or ‘r’ relative =res/y
showfixpar : boolean (None,False,0 or True,Yes,1)

Show the fixed parameters in errplot.

yscale,xscale : ‘n’,’l’ for ‘normal’, ‘logarithmic’

Y scale, log or normal (linear)

fitlinecolor : int, [int,int,int]

Color for fit lines (or line style as in plot). If not given, the same color as the data is used.

makeNewErrPlot(*args, **kwargs)[source]

Creates a NEW ErrPlot without destroying the last. See makeErrPlot for details.

Parameters:
**kwargs

Keyword arguments passed to makeErrPlot.

max(axis=None, out=None)

Return the maximum along a given axis.

Refer to numpy.amax for full documentation.

See also

numpy.amax
equivalent function
mean(axis=None, dtype=None, out=None, keepdims=False)

Returns the average of the array elements along given axis.

Refer to numpy.mean for full documentation.

See also

numpy.mean
equivalent function
merge(others, axis=1, isort=None)

Merges other dataArrays with self !!NOT in place!! (a new dataArray is returned).

Parameters:
others : dataArray or list of dataArrays

Arrays to merge with self.

axis : integer

Axis along which to concatenate, see numpy.concatenate.

isort : integer

Sort the merged array along column isort = i.
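The axis semantics can be illustrated with plain numpy.concatenate (a minimal sketch; the real method additionally carries the attributes along):

```python
import numpy as np

# two 3xN blocks (rows = columns X, Y, eY)
a = np.array([[1.0, 3.0], [10.0, 30.0], [0.1, 0.3]])
b = np.array([[2.0, 4.0], [20.0, 40.0], [0.2, 0.4]])

# axis=1 appends the data points of b behind those of a
m = np.concatenate([a, b], axis=1)

# sorting along column 0 (X), as isort=0 would do
m = m[:, np.argsort(m[0])]
```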

min(axis=None, out=None, keepdims=False)

Return the minimum along a given axis.

Refer to numpy.amin for full documentation.

See also

numpy.amin
equivalent function
modelValues(*args, **kwargs)

Calculates model values of the fitted model after a fit.

See dataList.modelValues()

nakedCopy()

Deepcopy without attributes, thus only the data.

name

Attribute name, mainly the filename of read data files.

nbytes

Total bytes consumed by the elements of the array.

Notes

Does not include memory consumed by non-element attributes of the array object.

Examples

>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480
ndim

Number of array dimensions.

Examples

>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
newbyteorder(new_order='S')

Return the array with the same data viewed with a different byte order.

Equivalent to:

arr.view(arr.dtype.newbyteorder(new_order))

Changes are also made in all fields and sub-arrays of the array data type.

Parameters:
new_order : string, optional

Byte order to force; a value from the byte order specifications below. new_order codes can be any of:

  • ‘S’ - swap dtype from current to opposite endian
  • {‘<’, ‘L’} - little endian
  • {‘>’, ‘B’} - big endian
  • {‘=’, ‘N’} - native order
  • {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of ‘B’ or ‘b’ or ‘biggish’ are valid to specify big-endian.

Returns:
new_arr : array

New array object with the dtype reflecting given change to the byte order.

nonzero()

Return the indices of the elements that are non-zero.

Refer to numpy.nonzero for full documentation.

See also

numpy.nonzero
equivalent function
partition(kth, axis=-1, kind='introselect', order=None)

Rearranges the elements in the array in such a way that value of the element in kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters:
kth : int or sequence of ints

Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once.

axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘introselect’}, optional

Selection algorithm. Default is ‘introselect’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.partition
Return a partitioned copy of an array.
argpartition
Indirect partition.
sort
Full sort.

Notes

See np.partition for notes on the different algorithms.

Examples

>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])
>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])
polyfit(X=None, deg=1, function=None, efunction=None)

Interpolated values for Y at values X using a polyfit.

Extrapolation is done by a polynomial fit over all Y, with weights from eY if eY is present. If function is given, the output needs to be transformed back by the inverse of function to get the correct result.

Parameters:
X : arraylike

X values where to calculate Y. If None then X=self.X, e.g. for smoothing/extrapolation.

deg : int

Degree of the polynomial used for interpolation, see numpy.polyfit.

function : function or lambda

Applied prior to polyfit as polyfit(function(Y)).

efunction : function or lambda

Applied prior to polyfit to eY as weights = efunction(eY); efunction should be built according to error propagation.

Returns:
dataArray

Notes

Remember to apply the inverse of function to the result!
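The transform-fit-invert idea can be sketched with plain numpy: fit a straight line to log(Y), then invert with exp (a minimal sketch assuming an exponential decay):

```python
import numpy as np

# exponential decay; fitting log(Y) with a straight line linearizes it
x = np.r_[0:5:0.25]
y = 2.0 * np.exp(-0.7 * x)

# polyfit on function(Y) = log(Y), degree 1
coef = np.polyfit(x, np.log(y), deg=1)

# evaluate at new x and apply the inverse of the function: exp(...)
xnew = np.r_[0:5:0.5]
ysmooth = np.exp(np.polyval(coef, xnew))
```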

prod(axis=None, dtype=None, out=None, keepdims=False)

Return the product of the array elements over the given axis

Refer to numpy.prod for full documentation.

See also

numpy.prod
equivalent function
prune(lower=None, upper=None, number=None, kind='lin', col='X', weight='eY', keep=None, type='mean')

Reduce number of values between upper and lower limits by selection or averaging in intervals.

Reduces the dataArray’s size. New values may be determined as next value, average in intervals, sum in intervals or by selection.

Parameters:
lower : float

Lower bound

upper : float

Upper bound

number : int

Number of points between [lower,upper] resulting in number intervals.

kind : {‘log’,’lin’}, default ‘lin’
Determines how the new points are distributed.
  • ‘log’ closest values in log distribution with number points in [lower,upper]
  • ‘lin’ closest values in lin distribution with number points in [lower,upper]
  • If number is None all points between [lower,upper] are used.
type : {None,’mean’,’error’,’mean+error’,’sum’} default ‘mean’
How to determine the value of a new point.
  • None next original value.
  • ‘sum’ sum in intervals. The col column will show the average (=sum/number of values). The last column contains the number of summed values.
  • ‘mean’ mean value in the interval.
  • ‘mean+std’ calculates the mean and adds error columns as standard deviation in the intervals. Can be used to generate errors as std in intervals if no errors are present. For intervals with a single value the error is interpolated from neighbouring values. ! For strongly pruned data the error may be badly defined if only a few points are averaged.
col : ‘X’,’Y’….., or int, default ‘X’

Column to prune along X,Y,Z or index of column.

weight : None, protectedNames as ‘eY’ or int

Column for weight as 1/err**2 in ‘mean’ calculation, weight column gets new error sqrt(1/sum_i(1/err_i**2))

  • None is equal weight
  • If weight does not exist or contains zeros, equal weights are used.
keep : list of int

List of indices to keep in any case e.g. keep=np.r_[0:10,90:101]

Returns:
dataArray with values pruned to *number* values.

Notes

Attention !!!!
Depending on the distribution of the original data, the result may contain fewer points. E.g. think of noisy data between 4 and 5 and a lin distribution of 9 points from 1 to 10: as there are no data between 5 and 10, all these intervals map to 5 and are reduced to a single unique value.

Examples

self.prune(number=13,col='X',type='mean+',weight='eY')
# or
self.prune(lower=0.1,number=13)
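The interval averaging behind prune can be sketched with numpy binning; a simplified illustration of the ‘lin’ + ‘mean’ case (not jscatter's implementation):

```python
import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(x)

# 'lin': number intervals with linearly spaced edges in [lower, upper]
number = 13
edges = np.linspace(x.min(), x.max(), number + 1)
idx = np.clip(np.digitize(x, edges) - 1, 0, number - 1)

# 'mean': average X and Y in each non-empty interval
xm = np.array([x[idx == i].mean() for i in range(number) if (idx == i).any()])
ym = np.array([y[idx == i].mean() for i in range(number) if (idx == i).any()])
```

Empty intervals are dropped, which is why the result may contain fewer than number points.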
ptp(axis=None, out=None)

Peak to peak (maximum - minimum) value along a given axis.

Refer to numpy.ptp for full documentation.

See also

numpy.ptp
equivalent function
put(indices, values, mode='raise')

Set a.flat[n] = values[n] for all n in indices.

Refer to numpy.put for full documentation.

See also

numpy.put
equivalent function
ravel([order])

Return a flattened array.

Refer to numpy.ravel for full documentation.

See also

numpy.ravel
equivalent function
ndarray.flat
a flat iterator on the array.
real

The real part of the array.

See also

numpy.real
equivalent function

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.real
array([ 1.        ,  0.70710678])
>>> x.real.dtype
dtype('float64')
regrid(xgrid=None, zgrid=None, wgrid=None, method='nearest', fill_value=0)

Regrid multidimensional data to a regular grid.

Add missing points in a regular grid (e.g. an image) or interpolate on a regular grid from irregular arranged data points. For 1D data use .interpolate

Parameters:
xgrid : array, None

New grid in x direction. If None the unique values in .X are used.

zgrid :array, None

New grid in z direction. If None the unique values in .Z are used.

wgrid :array, None

New grid in w direction. If None the unique values in .W are used.

method : float, ‘linear’, ‘nearest’, ‘cubic’

Order of interpolation between existing points, or a float as filling value for new points. See griddata.

fill_value

Value used to fill in for requested points outside of the convex hull of the input points. See griddata

Returns:
dataArray

Examples

This example repeats the scipy griddata example

import jscatter as js
import numpy as np
import matplotlib.pyplot as pyplot
import matplotlib.tri as tri
def func(x, y):
    return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2

# create random points in [0,1]
xz = np.random.rand(1000, 2)
v = func(xz[:,0], xz[:,1])
# create dataArray
data=js.dA(np.stack([xz[:,0], xz[:,1],v],axis=0),XYeYeX=[0, 2, None, None, 1, None])
fig0=js.mpl.scatter3d(data.X,data.Z,data.Y)
fig0.suptitle('original')

newdata=data.regrid(np.r_[0:1:100j],np.r_[0:1:200j],method='cubic')
fig1=js.mpl.surface(newdata.X,newdata.Z,newdata.Y)
fig1.suptitle('cubic')
pyplot.show()
repeat(repeats, axis=None)

Repeat elements of an array.

Refer to numpy.repeat for full documentation.

See also

numpy.repeat
equivalent function
reshape(shape, order='C')

Returns an array containing the same data with a new shape.

Refer to numpy.reshape for full documentation.

See also

numpy.reshape
equivalent function
resize(new_shape, refcheck=True)

Change shape and size of array in-place.

Parameters:
new_shape : tuple of ints, or n ints

Shape of resized array.

refcheck : bool, optional

If False, reference count will not be checked. Default is True.

Returns:
None
Raises:
ValueError

If a does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist.

SystemError

If the order keyword argument is specified. This behaviour is a bug in NumPy.

See also

resize
Return a new array with the specified shape.

Notes

This reallocates space for the data area if necessary.

Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False.

Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])

Enlarging an array: as above, but missing entries are filled with zeros:

>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])

Referencing an array prevents resizing…

>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that has been referenced ...

Unless refcheck is False:

>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])
resumeAttrTxt(names=None, maxlength=None)

Summarize attributes in text form.

A list with the first element of each attribute is converted to a string.

Parameters:
names : iterable

Names of attributes to use

maxlength : integer

Maximum length of the string

Returns:
string
round(decimals=0, out=None)

Return a with each element rounded to the given number of decimals.

Refer to numpy.around for full documentation.

See also

numpy.around
equivalent function
save(name, fmt='%8.5e')

Saves data in ASCII text file (optional gzipped).

If name extension is ‘.gz’ the file is compressed (gzip).

Parameters:
name : string, stringIO

Filename to write to or io.BytesIO.

fmt : string

Format specifier for float, passed to numpy.savetxt for the ndarray part: a single format (%10.5f), a sequence of formats, or a multi-format string, e.g. ‘Iteration %d – %10.5f’, in which case delimiter is ignored.

Notes

Format rules:

  • Data tables are separated by empty lines, parameters or comments.

  • A dataset consists of a data table with optional parameters and comments.

    The first two words decide the type of a line:

    string + value -> parameter as parameter name + list of values
    string + string -> comment line
    value + value -> data (line of an array; in sequence without break)
    single word -> appended to comments
    optional:
    1string + @string -> as parameter, but links to another dataArray with name @string
    (the content of the parameter with name 1string is stored in the same file after this dataset, identified by the parameter @name=1string)

    • Internal parameters starting with underscore (‘_’) are ignored for writing, as are X,Y,Z,eX,eY,eZ.
    • Only ndarray content is stored; no dictionaries in parameters.
    • @name is used as identifier; the filename can be accessed as name.
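The line-classification rules can be sketched in plain Python (a minimal illustration only; this is not jscatter's actual parser):

```python
def classify(line):
    """Classify a line by its first two words, following the format rules."""
    def isnum(w):
        try:
            float(w)
            return True
        except ValueError:
            return False

    words = line.split()
    if len(words) < 2:
        return 'comment'                 # single words are appended to comments
    first, second = words[0], words[1]
    if isnum(first) and isnum(second):
        return 'data'                    # value + value -> data line
    if not isnum(first) and isnum(second):
        return 'parameter'               # string + value -> parameter
    return 'comment'                     # string + string -> comment line
```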
savelastErrPlot(*args, **kwargs)[source]

Saves errplot to file with filename.

savetext(name, fmt='%8.5e')

Saves data in ASCII text file (optional gzipped). Same as save(), see there for details.
savetxt(name, fmt='%8.5e')

Saves data in ASCII text file (optional gzipped). Same as save(), see there for details.
searchsorted(v, side='left', sorter=None)

Find indices where elements of v should be inserted in a to maintain order.

For full documentation, see numpy.searchsorted

See also

numpy.searchsorted
equivalent function
setColumnIndex(*args, **kwargs)

Set the column index where to find X,Y,Z and the errors eY, eX, eZ.

A list of all X values in the dataArray is dataArray.X. For array.ndim=1 the default is ix=0 and all others None.

Parameters:
ix,iy,iey,iex,iz,iez,iw,iew : integer, None, default= 0,1,2,None,None,None,None,None
Set column index, where to find X, Y, eY.
  • Default from initialisation is ix,iy,iey,iex,iz,iez,iw,iew=0,1,2,None,None,None,None,None. (Usability wins iey=2!!)
  • If dataArray is given the ColumnIndex is copied, others are ignored.
  • If list [0,1,3] is given these are used as [ix,iy,iey,iex,iz,iez,iw,iew].

Remember that negative indices always are counted from back, which changes the column when adding a new column.

Notes

  • integer column index as 0,1,2,-1; should be in range
  • None marks a column as not used, e.g. iex=None -> no errors for x
  • anything else leaves the index unchanged
  • take care: -1 always refers to the last column
setConstrain(*args)

Set constraints for constrained minimization in fit.

Inequality constraints are accounted for by an exterior penalty function that increases chi^2. Equality constraints should be incorporated in the model function to reduce the number of parameters.

Parameters:
args : function or lambda function

Function that defines a constraint by returning a boolean, with the free and fixed parameters as input. The constraint function should return True in the accepted region and False otherwise. Calling without a function removes all constraints.

Notes

See dataList
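The exterior penalty idea — inflating chi^2 whenever a constraint is violated — can be sketched with plain scipy (a minimal illustration; the model and the penalty size are made up):

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 5, 50)
y = 3.0 * np.exp(-1.2 * x)

def constraint(A, k):
    # accepted region -> True (as a setConstrain function would return)
    return A > 0 and k > 0

def chi2(p):
    A, k = p
    res = np.sum((A * np.exp(-k * x) - y) ** 2)
    if not constraint(A, k):
        res += 1e6          # exterior penalty pushes the minimizer back
    return res

best = minimize(chi2, x0=[1.0, 1.0], method='Nelder-Mead')
```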

setLimit(*args, **kwargs)

Set upper and lower limits for parameters in least square fit.

See dataList.setlimit()

setattr(objekt, prepend='', keyadd='_')

Set (copy) attributes from objekt.

Parameters:
objekt : objekt or dictionary

Can be a dictionary of name:value pairs like {‘name’:[1,2,3,7,9]}. If objekt is a dataArray the attributes from dataArray.attr are copied.

prepend : string, default ‘’

Prepend this string to all attribute names.

keyadd : char, default=’_’

If reserved attribute names (T, mean, ..) are found, the used name is e.g. ‘T’+keyadd.

setfield(val, dtype, offset=0)

Put a value into a specified place in a field defined by a data-type.

Place val into a’s field defined by dtype and beginning offset bytes into the field.

Parameters:
val : object

Value to be placed in field.

dtype : dtype object

Data-type of the field in which to place val.

offset : int, optional

The number of bytes into the field at which to place val.

Returns:
None

See also

getfield

Examples

>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]])
>>> x
array([[  1.00000000e+000,   1.48219694e-323,   1.48219694e-323],
       [  1.48219694e-323,   1.00000000e+000,   1.48219694e-323],
       [  1.48219694e-323,   1.48219694e-323,   1.00000000e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
setflags(write=None, align=None, uic=None)

Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.

These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The UPDATEIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters:
write : bool, optional

Describes whether or not a can be written to.

align : bool, optional

Describes whether or not a is aligned properly for its type.

uic : bool, optional

Describes whether or not a is a copy of another “base” array.

Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 6 Boolean flags in use, only three of which can be changed by the user: UPDATEIFCOPY, WRITEABLE, and ALIGNED.

WRITEABLE (W) the data area can be written to;

ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler);

UPDATEIFCOPY (U) this array is a copy of some other array (referenced by .base). When this array is deallocated, the base array will be updated with the contents of this array.

All flags can be accessed using their first (upper case) letter as well as the full name.

Examples

>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : False
  ALIGNED : False
  UPDATEIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot set UPDATEIFCOPY flag to True
setlimit(*args, **kwargs)

Set upper and lower limits for parameters in least square fit.

See dataList.setlimit()
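
setlimit itself is documented under dataList.setlimit(). As a rough illustration of what box limits do in a least-squares context, one common technique is to penalize the residuals when a parameter leaves its [low, high] interval. The function and weight below are a sketch of that idea, not jscatter's implementation:

```python
import numpy as np

def penalized_residuals(residuals, params, limits, weight=1e3):
    # append one penalty term per parameter; zero while the parameter
    # stays inside its [low, high] box, growing linearly outside it
    penalty = []
    for name, value in params.items():
        low, high = limits.get(name, (-np.inf, np.inf))
        excess = max(low - value, 0.0) + max(value - high, 0.0)
        penalty.append(weight * excess)
    return np.concatenate([residuals, penalty])

# D = 1.5 violates its upper limit of 1.0 -> one extra residual of 500.0
res = penalized_residuals(np.array([0.1, -0.2]), {'D': 1.5}, {'D': (0.0, 1.0)})
```

With such a penalty the optimizer is driven back into the allowed box; hard limits may instead clip the parameter outright.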

shape

Tuple of array dimensions.

Notes

May be used to “reshape” the array, as long as this would not require a change in the total number of elements.

Examples

>>> x = np.array([1, 2, 3, 4])
>>> x.shape
(4,)
>>> y = np.zeros((2, 3, 4))
>>> y.shape
(2, 3, 4)
>>> y.shape = (3, 8)
>>> y
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> y.shape = (3, 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged
showattr(maxlength=None, exclude=['comment'])

Show data-specific attributes with their values as an overview.

Parameters:
maxlength : int

Truncate the string representation after maxlength characters.

exclude : list of str

List of attribute names to exclude from the result.
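
A rough sketch of what such an overview does, shown on a plain Python object instead of a dataArray (the Data class and attribute names below are illustrative only):

```python
class Data:
    pass

d = Data()
d.Temperature = 293.15
d.comment = 'a very long comment ' * 20

def showattr(obj, maxlength=None, exclude=('comment',)):
    # collect one "name : value" line per attribute, skipping excluded
    # names and truncating long string representations
    lines = []
    for name, value in vars(obj).items():
        if name in exclude:
            continue
        text = '{} : {}'.format(name, value)
        if maxlength is not None:
            text = text[:maxlength]
        lines.append(text)
    return lines

overview = showattr(d, maxlength=30)
```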

showlastErrPlot(*args, **kwargs)[source]

Shows the last ErrPlot as created by makeErrPlot with the last fit result.

Same arguments as in makeErrPlot.

Additional keyword arguments are passed to modelValues and simulate changes in the parameters. Without additional parameters the last fit result is retrieved.

size

Number of elements in the array.

Equivalent to np.prod(a.shape), i.e., the product of the array’s dimensions.

Examples

>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
sort(axis=-1, kind='quicksort', order=None)

Sort an array, in-place.

Parameters:
axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, optional

Sorting algorithm. Default is ‘quicksort’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.sort
Return a sorted copy of an array.
argsort
Indirect sort.
lexsort
Indirect stable sort on multiple keys.
searchsorted
Find elements in sorted array.
partition
Partial sort.

Notes

See sort for notes on the different sorting algorithms.

Examples

>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])

Use the order keyword to specify a field to use when sorting a structured array:

>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([('c', 1), ('a', 2)],
      dtype=[('x', '|S1'), ('y', '<i4')])
squeeze(axis=None)

Remove single-dimensional entries from the shape of a.

Refer to numpy.squeeze for full documentation.

See also

numpy.squeeze
equivalent function
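
A minimal example of squeeze and its axis argument:

```python
import numpy as np

x = np.zeros((1, 3, 1))
x.squeeze().shape        # all length-1 axes removed -> (3,)
x.squeeze(axis=0).shape  # only the given length-1 axis removed -> (3, 1)
```
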
std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the standard deviation of the array elements along given axis.

Refer to numpy.std for full documentation.

See also

numpy.std
equivalent function
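
A short example contrasting the default population estimate (ddof=0) with the sample estimate (ddof=1):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
a.std()        # population standard deviation, divisor N
a.std(ddof=1)  # sample standard deviation, divisor N - ddof = 3
```
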
strides

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element (i[0], i[1], ..., i[n]) in an array a is:

offset = sum(np.array(i) * a.strides)

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

See also

numpy.lib.stride_tricks.as_strided

Notes

Imagine an array of 32-bit integers (each 4 bytes):

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4).

Examples

>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset=sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
sum(axis=None, dtype=None, out=None, keepdims=False)

Return the sum of the array elements over the given axis.

Refer to numpy.sum for full documentation.

See also

numpy.sum
equivalent function
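
A minimal example of the axis and keepdims arguments:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
a.sum()                       # 10, sum over all elements
a.sum(axis=0)                 # column sums -> array([4, 6])
a.sum(axis=1, keepdims=True)  # row sums as a (2, 1) column vector
```
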
swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

Refer to numpy.swapaxes for full documentation.

See also

numpy.swapaxes
equivalent function
take(indices, axis=None, out=None, mode='raise')

Return an array formed from the elements of a at the given indices.

Refer to numpy.take for full documentation.

See also

numpy.take
equivalent function
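
A minimal example; take with an index array is equivalent to fancy indexing, and the shape of the index array determines the shape of the result:

```python
import numpy as np

a = np.array([4, 3, 5, 7, 6, 8])
a.take([0, 1, 4])         # same as a[[0, 1, 4]] -> array([4, 3, 6])
a.take([[0, 1], [2, 3]])  # 2x2 index array gives a (2, 2) result
```
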
tobytes(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

New in version 1.9.0.

Parameters:
order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:
s : bytes

Python bytes exhibiting a copy of a’s raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
tofile(fid, sep="", format="%s")

Write array to a file as text or binary (default).

Data is always written in ‘C’ order, independent of the order of a. The data produced by this method can be recovered using the function fromfile().

Parameters:
fid : file or str

An open file object, or a string containing a filename.

sep : str

Separator between array items for text output. If “” (empty), a binary file is written, equivalent to file.write(a.tobytes()).

format : str

Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.
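
A short round-trip example illustrating the caveat above: the binary file stores neither dtype nor shape, so both must be supplied again when reading it back:

```python
import numpy as np
import os
import tempfile

a = np.arange(6, dtype=np.int32).reshape(2, 3)
path = os.path.join(tempfile.mkdtemp(), 'a.bin')
a.tofile(path)                          # raw binary, no dtype/shape stored
b = np.fromfile(path, dtype=np.int32)   # comes back as a flat array ...
b = b.reshape(2, 3)                     # ... shape must be restored by hand
```
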

tolist()

Return the array as a (possibly nested) list.

Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.

Parameters:
none
Returns:
y : list

The possibly nested list of array elements.

Notes

The array may be recreated, a = np.array(a.tolist()).

Examples

>>> a = np.array([1, 2])
>>> a.tolist()
[1, 2]
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
tostring(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.

Parameters:
order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:
s : bytes

Python bytes exhibiting a copy of a’s raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)

Return the sum along diagonals of the array.

Refer to numpy.trace for full documentation.

See also

numpy.trace
equivalent function
transpose(*axes)

Returns a view of the array with axes transposed.

For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).

Parameters:
axes : None, tuple of ints, or n ints
  • None or no argument: reverses the order of the axes.
  • tuple of ints: i in the j-th place in the tuple means a’s i-th axis becomes a.transpose()’s j-th axis.
  • n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form)
Returns:
out : ndarray

View of a, with axes suitably permuted.

See also

ndarray.T
Array property returning the array transposed.

Examples

>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])
var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the variance of the array elements along the given axis.

Refer to numpy.var for full documentation.

See also

numpy.var
equivalent function
view(dtype=None, type=None)

New view of array with the same data.

Parameters:
dtype : data-type or ndarray sub-class, optional

Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter).

type : Python type, optional

Type of the returned view, e.g., ndarray or matrix. Again, the default None results in type preservation.

Notes

a.view() is used two different ways:

a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.

For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.

Examples

>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])

Viewing array data using a different type and dtype:

>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrixlib.defmatrix.matrix'>

Creating a view on a structured array so it can be used in calculations

>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([ 2.,  3.])

Making changes to the view changes the underlying array

>>> xv[0,1] = 20
>>> print(x)
[(1, 20) (3, 4)]

Using a view to convert an array to a recarray:

>>> z = x.view(np.recarray)
>>> z.a
array([1], dtype=int8)

Views share data:

>>> x[0] = (9, 10)
>>> z[0]
(9, 10)

Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

>>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
>>> y = x[:, 0:2]
>>> y
array([[1, 2],
       [4, 5]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: new type not compatible with array.
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 2)],
       [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
where(condition)

Return a copy containing only the points (lines) where the condition is fulfilled.

Parameters:
condition : function

Function of the dataArray returning a boolean array.

Examples

data.where(lambda a: a.X > 1)
data.where(lambda a: (a.X**2 > 1) & (a.Y > 0.05))
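
For orientation, the same filtering idea expressed in plain numpy, assuming dataArray's layout with X and Y stored as rows of the array (the array below is illustrative only):

```python
import numpy as np

data = np.array([[0.5, 1.5, 2.5, 3.5],   # first row plays the role of X
                 [0.1, 0.2, 0.3, 0.4]])  # second row plays the role of Y
mask = data[0] > 1                       # boolean condition on X
filtered = data[:, mask]                 # keep only the matching points
```
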
dataarray.zeros(**kwargs)

dataArray filled with zeros.

Parameters:
shape : integer or tuple of integer

Shape of the new array, e.g., (2, 3) or 2.

Returns:
dataArray

Examples

js.zeros((3,20))
dataarray.ones(**kwargs)

dataArray filled with ones.

Parameters:
shape : integer or tuple of integer

Shape of the new array, e.g., (2, 3) or 2.

Returns:
dataArray

Examples

js.ones((3,20))
dataarray.fromFunction(X, *args, **kwargs)

Evaluates Y = function(X) for all X and returns a dataArray containing X, Y.

Parameters:
function : function or lambda

Function to evaluate with X[i] as first argument; the result is flattened to one dimension.

X : array N x M

Array of X values; the function is evaluated along the first dimension (N), e.g. from np.linspace or np.logspace.

*args,**kwargs : arguments passed to function
Returns:
dataArray of shape N x (ndim(X) + ndim(function(X)))

Examples

import jscatter as js
result=js.fromFunction(lambda x,n:[1,x,x**(2*n),x**(3*n)],np.linspace(1,50),2)
#
X=(np.linspace(0,30).repeat(3).reshape(-1,3)*np.r_[1,2,3])
result=js.fromFunction(lambda x:[1,x[0],x[1]**2,x[2]**3],X)
#
ff=lambda x,n,m:[1,x[0],x[1]**(2*n),x[2]**(3*m)]
X=(np.linspace(0,30).repeat(3).reshape(-1,3)*np.r_[1,2,3])
result1=js.fromFunction(ff,X,3,2)
result2=js.fromFunction(ff,X,m=3,n=2)
result1.showattr()
result2.showattr()
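
A rough pure-numpy sketch of this evaluation pattern (from_function below is illustrative, not the jscatter implementation):

```python
import numpy as np

def from_function(function, X, *args, **kwargs):
    # evaluate the function for each X[i], flatten the result,
    # and stack X[i] together with the function values as one row
    rows = [np.hstack([np.ravel(x), np.ravel(function(x, *args, **kwargs))])
            for x in np.atleast_1d(X)]
    return np.array(rows)

result = from_function(lambda x, n: [1, x, x**(2*n)], np.linspace(1, 5, 5), 2)
# each row is [x, 1, x, x**4]; result has shape (5, 4)
```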