a :jg@sdZddlZddlZddlZddlZddlZddlZddlZddlZddlm Z m Z m Z ddl mZddlZddlZddlmZddlmZddlmZdd lmZmZdd lmZdd lmZmZdd l m!Z!m"Z"m#Z#m$Z$m%Z%m&Z&m'Z'm(Z(m)Z)m*Z*m+Z+dd l,m-Z-m.Z.gdZ/ej0ej1ddZ1GdddZ2ddZ3edGdddeZ4eddUej5dddZ6dVddZ7e1e7dWd d!Z8d"d#Z9e1e9d$d%Z:d&d'Z;e1e;d(d)Ze?d.d/d0Z@d1ZAdYd3d4ZBd5d6ZCd1ZAd7d8d9d:ddddddejDdd; dd?d@ZGe1eGZHdZdAdBZIe1eId[dHdIZJedd\dJdKZKeedeFd8ddddddddddFLeMe"jNdLdddMddddddfdddNdOdPZOe1eOZPdQdRZQdSdTZRdS)]z IO related functions. N) itemgetterindex methodcaller)Mapping)format) DataSource) overrides)packbits unpackbits)_load_from_filelike)set_array_function_like_doc set_module) LineSplitter NameValidatorStringConverterConverterErrorConverterLockErrorConversionWarning_is_string_likehas_nested_fields flatten_dtype easy_dtype _decode_line) asunicodeasbytes) savetxtloadtxt genfromtxtloadsavesavezsavez_compressedr r fromregexnumpy)modulec@s(eZdZdZddZddZddZdS) BagObjas BagObj(obj) Convert attribute look-ups to getitems on the object passed in. Parameters ---------- obj : class instance Object on which attribute look-up is performed. Examples -------- >>> from numpy.lib._npyio_impl import BagObj as BO >>> class BagDemo: ... def __getitem__(self, key): # An instance of BagObj(BagDemo) ... # will call this method when any ... # attribute look-up is required ... result = "Doesn't matter what you want, " ... return result + "you're gonna get this" ... >>> demo_obj = BagDemo() >>> bagobj = BO(demo_obj) >>> bagobj.hello_there "Doesn't matter what you want, you're gonna get this" >>> bagobj.I_can_be_anything "Doesn't matter what you want, you're gonna get this" cCst||_dSN)weakrefproxy_obj)selfobjr-?/usr/local/lib/python3.9/site-packages/numpy/lib/_npyio_impl.py__init__GszBagObj.__init__cCs4zt|d|WSty.t|dYn0dS)Nr*)object__getattribute__KeyErrorAttributeErrorr+keyr-r-r.r1Ks zBagObj.__getattribute__cCstt|dS)z Enables dir(bagobj) to list the files in an NpzFile. This also enables tab-completion in an interpreter or IPython. r*)listr0r1keysr+r-r-r.__dir__QszBagObj.__dir__N)__name__ __module__ __qualname____doc__r/r1r9r-r-r-r.r&)sr&cOs<t|dst|}ddl}d|d<|j|g|Ri|S)z Create a ZipFile. Allows for Zip64, and the `file` argument can accept file, str, or pathlib.Path objects. `args` and `kwargs` are passed to the zipfile.ZipFile constructor. readrNT allowZip64)hasattrosfspathzipfileZipFile)fileargskwargsrCr-r-r.zipfile_factoryZs   rHznumpy.lib.npyioc@seZdZdZdZdZdZd"ejdddZ dd Z d d Z d d Z ddZ ddZddZddZddZddZd#ddZddZddZd d!ZdS)$NpzFilea< NpzFile(fid) A dictionary-like object with lazy-loading of files in the zipped archive provided on construction. `NpzFile` is used to load files in the NumPy ``.npz`` data archive format. It assumes that files in the archive have a ``.npy`` extension, other files are ignored. The arrays and file strings are lazily loaded on either getitem access using ``obj['key']`` or attribute lookup using ``obj.f.key``. A list of all files (without ``.npy`` extensions) can be obtained with ``obj.files`` and the ZipFile object itself using ``obj.zip``. Attributes ---------- files : list of str List of all files in the archive with a ``.npy`` extension. zip : ZipFile instance The ZipFile object initialized with the zipped archive. f : BagObj instance An object on which attribute can be performed as an alternative to getitem access on the `NpzFile` instance itself. allow_pickle : bool, optional Allow loading pickled data. Default: False .. versionchanged:: 1.16.3 Made default False in response to CVE-2019-6446. pickle_kwargs : dict, optional Additional keyword arguments to pass on to pickle.load. These are only useful when loading object arrays saved on Python 2 when using Python 3. max_header_size : int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See :py:func:`ast.literal_eval()` for details. This option is ignored when `allow_pickle` is passed. In that case the file is by definition trusted and the limit is unnecessary. Parameters ---------- fid : file, str, or pathlib.Path The zipped archive to open. This is either a file-like object or a string containing the path to the archive. own_fid : bool, optional Whether NpzFile should close the file handle. Requires that `fid` is a file-like object. Examples -------- >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> y = np.sin(x) >>> np.savez(outfile, x=x, y=y) >>> _ = outfile.seek(0) >>> npz = np.load(outfile) >>> isinstance(npz, np.lib.npyio.NpzFile) True >>> npz NpzFile 'object' with keys: x, y >>> sorted(npz.files) ['x', 'y'] >>> npz['x'] # getitem access array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> npz.f.x # attribute lookup array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) NFmax_header_sizecCst|}||_g|_||_||_||_|jD]0}|drT|j|ddq0|j|q0||_ t ||_ |r|||_ dS)N.npy) rHnamelist_filesfiles allow_picklerL pickle_kwargsendswithappendzipr&ffid)r+rXown_fidrRrSrL_zipxr-r-r.r/s    zNpzFile.__init__cCs|Sr'r-r8r-r-r. __enter__szNpzFile.__enter__cCs |dSr'close)r+exc_type exc_value tracebackr-r-r.__exit__szNpzFile.__exit__cCs>|jdur|jd|_|jdur4|jd|_d|_dS)z" Close the file. N)rVr^rXrWr8r-r-r.r^s    z NpzFile.closecCs |dSr'r]r8r-r-r.__del__szNpzFile.__del__cCs t|jSr')iterrQr8r-r-r.__iter__szNpzFile.__iter__cCs t|jSr')lenrQr8r-r-r.__len__szNpzFile.__len__cCsd}||jvrd}n||jvr*d}|d7}|r|j|}|ttj}||tjkr|j|}tj ||j |j |j dS|j|Snt |ddS)NFTrMrRrSrLz is not a file in the archive)rPrQrVopenr>rfr MAGIC_PREFIXr^ read_arrayrRrSrLr2)r+r5memberbytesmagicr-r-r. __getitem__s&      zNpzFile.__getitem__cCs||jvp||jvSr')rPrQr4r-r-r. __contains__ szNpzFile.__contains__cCs`t|jtr|j}nt|jdd}d|jd|j}t|j|jkrP|d7}d|d|S)Nnamer0z, z...zNpzFile z with keys: ) isinstancerXstrgetattrjoinrQ_MAX_REPR_ARRAY_COUNTrf)r+filenameZ array_namesr-r-r.__repr__ s zNpzFile.__repr__cCst|||S)zT D.get(k,[,d]) returns D[k] if k in D, else d. d defaults to None. )rget)r+r5defaultr-r-r.rysz NpzFile.getcCs t|S)zS D.items() returns a set-like object providing a view on the items )ritemsr8r-r-r.r{#sz NpzFile.itemscCs t|S)zQ D.keys() returns a set-like object providing a view on the keys )rr7r8r-r-r.r7)sz NpzFile.keyscCs t|S)zU D.values() returns a set-like object providing a view on the values )rvaluesr8r-r-r.r|/szNpzFile.values)FFN)N)r:r;r<r=rVrXrvr_MAX_HEADER_SIZEr/r\rbr^rcrergrorprxryr{r7r|r-r-r-r.rIis,J   rIFTASCIIrKc Cs|dvrtdt||d}t}t|dr<|}d} n|tt|d}d} d} d } t t j } | | } | s~t d |t| t |  d | | s| | r|t|| |||d }|Wd S| t j kr0|r|rd}t j|||dWd St j||||dWd Snl|s>tdz"tj|fi|WWd Sty}z td|d|WYd }~n d }~00Wd n1s0Yd S)a0 Load arrays or pickled objects from ``.npy``, ``.npz`` or pickled files. .. warning:: Loading files that contain object arrays uses the ``pickle`` module, which is not secure against erroneous or maliciously constructed data. Consider passing ``allow_pickle=False`` to load data that is known not to contain object arrays for the safer handling of untrusted sources. Parameters ---------- file : file-like object, string, or pathlib.Path The file to read. File-like objects must support the ``seek()`` and ``read()`` methods and must always be opened in binary mode. Pickled files require that the file-like object support the ``readline()`` method as well. mmap_mode : {None, 'r+', 'r', 'w+', 'c'}, optional If not None, then memory-map the file, using the given mode (see `numpy.memmap` for a detailed description of the modes). A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory. allow_pickle : bool, optional Allow loading pickled object arrays stored in npy files. Reasons for disallowing pickles include security, as loading pickled data can execute arbitrary code. If pickles are disallowed, loading object arrays will fail. Default: False .. versionchanged:: 1.16.3 Made default False in response to CVE-2019-6446. fix_imports : bool, optional Only useful when loading Python 2 generated pickled files on Python 3, which includes npy/npz files containing object arrays. If `fix_imports` is True, pickle will try to map the old Python 2 names to the new names used in Python 3. encoding : str, optional What encoding to use when reading Python 2 strings. Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays. Values other than 'latin1', 'ASCII', and 'bytes' are not allowed, as they can corrupt numerical data. Default: 'ASCII' max_header_size : int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See :py:func:`ast.literal_eval()` for details. This option is ignored when `allow_pickle` is passed. In that case the file is by definition trusted and the limit is unnecessary. Returns ------- result : array, tuple, dict, etc. Data stored in the file. For ``.npz`` files, the returned instance of NpzFile class must be closed to avoid leaking file descriptors. Raises ------ OSError If the input file does not exist or cannot be read. UnpicklingError If ``allow_pickle=True``, but the file cannot be loaded as a pickle. ValueError The file contains an object array, but ``allow_pickle=False`` given. EOFError When calling ``np.load`` multiple times on the same file handle, if all data has already been read See Also -------- save, savez, savez_compressed, loadtxt memmap : Create a memory-map to an array stored in a file on disk. lib.format.open_memmap : Create or load a memory-mapped ``.npy`` file. Notes ----- - If the file contains pickle data, then whatever object is stored in the pickle is returned. - If the file is a ``.npy`` file, then a single array is returned. - If the file is a ``.npz`` file, then a dictionary-like object is returned, containing ``{filename: array}`` key-value pairs, one for each file in the archive. - If the file is a ``.npz`` file, the returned value supports the context manager protocol in a similar fashion to the open function:: with load('foo.npz') as data: a = data['a'] The underlying file descriptor is closed when exiting the 'with' block. Examples -------- Store data to disk, and load it again: >>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]])) >>> np.load('/tmp/123.npy') array([[1, 2, 3], [4, 5, 6]]) Store compressed data to disk, and load it again: >>> a=np.array([[1, 2, 3], [4, 5, 6]]) >>> b=np.array([1, 2]) >>> np.savez('/tmp/123.npz', a=a, b=b) >>> data = np.load('/tmp/123.npz') >>> data['a'] array([[1, 2, 3], [4, 5, 6]]) >>> data['b'] array([1, 2]) >>> data.close() Mem-map the stored array, and then access the second row directly from disk: >>> X = np.load('/tmp/123.npy', mmap_mode='r') >>> X[1, :] memmap([4, 5, 6]) )r~latin1rmz.encoding must be 'ASCII', 'latin1', or 'bytes')encoding fix_importsr>FrbTsPKsPKzNo data left in filer)rYrRrSrLNl)moderLrhz@Cannot load file containing pickled data when allow_pickle=FalsezFailed to interpret file z as a pickle) ValueErrordict contextlib ExitStackr@ enter_contextrirArBrfrrjr>EOFErrorseekmin startswithpop_allrIZ open_memmaprkpickler ExceptionUnpicklingError)rEZ mmap_moderRrrrLrSstackrXrYZ _ZIP_PREFIXZ _ZIP_SUFFIXNrnreter-r-r.r6sX|       " rcCs|fSr'r-)rEarrrRrr-r-r._save_dispatchersrcCst|drt|}n&t|}|ds2|d}t|d}|2}t|}t j |||t |ddWdn1sx0YdS)a= Save an array to a binary file in NumPy ``.npy`` format. Parameters ---------- file : file, str, or pathlib.Path File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a ``.npy`` extension will be appended to the filename if it does not already have one. arr : array_like Array data to be saved. allow_pickle : bool, optional Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True fix_imports : bool, optional Only useful in forcing objects in object arrays on Python 3 to be pickled in a Python 2 compatible way. If `fix_imports` is True, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. See Also -------- savez : Save several arrays into a ``.npz`` archive savetxt, load Notes ----- For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`. Any data saved to the file is appended to the end of the file. Examples -------- >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> np.save(outfile, x) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> np.load(outfile) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> with open('test.npy', 'wb') as f: ... np.save(f, np.array([1, 2])) ... np.save(f, np.array([1, 3])) >>> with open('test.npy', 'rb') as f: ... a = np.load(f) ... b = np.load(f) >>> print(a, b) # [1 2] [1 3] writerMwb)rrRrSN) r@r nullcontextrArBrTrinp asanyarrayr write_arrayr)rErrRrZfile_ctxrXr-r-r.r s=       r cos|EdH|EdHdSr'r|rErFkwdsr-r-r._savez_dispatcherBs rcOst|||ddS)a Save several arrays into a single file in uncompressed ``.npz`` format. Provide arrays as keyword arguments to store them under the corresponding name in the output file: ``savez(fn, x=x, y=y)``. If arrays are specified as positional arguments, i.e., ``savez(fn, x, y)``, their names will be `arr_0`, `arr_1`, etc. Parameters ---------- file : file, str, or pathlib.Path Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the ``.npz`` extension will be appended to the filename if it is not already there. args : Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named "arr_0", "arr_1", and so on. kwds : Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns ------- None See Also -------- save : Save a single array to a binary file in NumPy format. savetxt : Save an array to a file as plain text. savez_compressed : Save several arrays into a compressed ``.npz`` archive Notes ----- The ``.npz`` file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in ``.npy`` format. For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`. When opening the saved ``.npz`` file with `load` a `~lib.npyio.NpzFile` object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the ``.files`` attribute), and for the arrays themselves. Keys passed in `kwds` are used as filenames inside the ZIP archive. Therefore, keys should be valid filenames; e.g., avoid keys that begin with ``/`` or contain ``.``. When naming variables with keyword arguments, it is not possible to name a variable ``file``, as this would cause the ``file`` argument to be defined twice in the call to ``savez``. Examples -------- >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> y = np.sin(x) Using `savez` with \*args, the arrays are saved with default names. >>> np.savez(outfile, x, y) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> npzfile = np.load(outfile) >>> npzfile.files ['arr_0', 'arr_1'] >>> npzfile['arr_0'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) Using `savez` with \**kwds, the arrays are saved with the keyword names. >>> outfile = TemporaryFile() >>> np.savez(outfile, x=x, y=y) >>> _ = outfile.seek(0) >>> npzfile = np.load(outfile) >>> sorted(npzfile.files) ['x', 'y'] >>> npzfile['x'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) FN_savezrr-r-r.r!GsTr!cos|EdH|EdHdSr'rrr-r-r._savez_compressed_dispatchers rcOst|||ddS)a Save several arrays into a single file in compressed ``.npz`` format. Provide arrays as keyword arguments to store them under the corresponding name in the output file: ``savez_compressed(fn, x=x, y=y)``. If arrays are specified as positional arguments, i.e., ``savez_compressed(fn, x, y)``, their names will be `arr_0`, `arr_1`, etc. Parameters ---------- file : file, str, or pathlib.Path Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the ``.npz`` extension will be appended to the filename if it is not already there. args : Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named "arr_0", "arr_1", and so on. kwds : Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns ------- None See Also -------- numpy.save : Save a single array to a binary file in NumPy format. numpy.savetxt : Save an array to a file as plain text. numpy.savez : Save several arrays into an uncompressed ``.npz`` file format numpy.load : Load the files created by savez_compressed. Notes ----- The ``.npz`` file format is a zipped archive of files named after the variables they contain. The archive is compressed with ``zipfile.ZIP_DEFLATED`` and each file in the archive contains one variable in ``.npy`` format. For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`. When opening the saved ``.npz`` file with `load` a `~lib.npyio.NpzFile` object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the ``.files`` attribute), and for the arrays themselves. Examples -------- >>> test_array = np.random.rand(3, 2) >>> test_vector = np.random.rand(4) >>> np.savez_compressed('/tmp/123', a=test_array, b=test_vector) >>> loaded = np.load('/tmp/123.npz') >>> print(np.array_equal(test_array, loaded['a'])) True >>> print(np.array_equal(test_vector, loaded['b'])) True TNrrr-r-r.r"s?r"c Csddl}t|ds.t|}|ds.|d}|}t|D]0\}} d|} | |vrbtd| | || <q:|rx|j} n|j } t |d| d} | D]\\} } | d} t | } | j| dd d "}tj|| ||d Wdq1s0Yq| dS) Nrrz.npzzarr_%dz,Cannot use un-named variables and keyword %sw)r compressionrMT) force_zip64r)rCr@rArBrT enumerater7r ZIP_DEFLATED ZIP_STOREDrHr{rrrirrr^)rErFrcompressrRrSrCZnamedictivalr5rZzipffnamerXr-r-r.rs4      &rcCs|dvrtd|dS)zJust checks if the param ndmin is supported on _ensure_ndmin_ndarray. It is intended to be used as verification before running anything expensive. e.g. loadtxt, genfromtxt )rrz Illegal value of ndmin keyword: N)rndminr-r-r.!_ensure_ndmin_ndarray_check_param srrcCsJ|j|krt|}|j|krF|dkr2t|}n|dkrFt|j}|S)aThis is a helper function of loadtxt and genfromtxt to ensure proper minimum dimension as requested ndim : int. Supported values 1, 2, 3 ^^ whenever this changes, keep in sync with _ensure_ndmin_ndarray_check_param rr)ndimrZsqueezeZ atleast_1d atleast_2dT)arr-r-r._ensure_ndmin_ndarrays     riPargumentcCsLzt|Wn"ty0t|ddYn0|dkrHt|ddS)Nz must be an integerrz must be nonnegative)operatorr TypeErrorr)valuerqr-r-r._check_nonneg_int/s  rccsB|D]8}t|tr||}|D]}||dd}q |VqdS)a Generator that consumes a line iterated iterable and strips out the multiple (or multi-character) comments from lines. This is a pre-processing step to achieve feature parity with loadtxt (we assume that this feature is a nieche feature). rrN)rrrmdecodesplit)iterablecommentsrlinecr-r-r._preprocess_comments8s   r,#"j) delimitercommentquoteimaginary_unitusecols skiplinesmax_rows convertersrunpackdtyperc sHd} | dkrd} d} | dur$tdt| } d}| jdvrj| dks\| dks\| d ks\| d krj| }tt} |durz t|}Wnty|g}Yn0t| |durd}nd |vrtd t|}d}t |d krd}n^t |dkrt |d t r:t |d dkr:|d }d}n ||vr:td|d|d|durV|durVtdt |dkrltdt ||durt |nd}t }d}zt |tjrt|}t |t rtjjj|d| d}| durt|dd} t |}|}d}n| durt|dd} t|}Wn<tyX}z"tdt|d|WYd}~n d}~00|^|dur|r~t|}d}t||| }|durt|||||||||| | || d n|rt|}d}d}|dkrd}g}|d krd|d krt}n tt|}t|||||||||| | || |d}|||d }|d krN||8}t ||krqdqt |dkrt |dd kr|d=t |dkr|d ntj|d dWdn1s0Yt | dj!r j!d d kr t"j#d |d!t$d"d#| r@j}|j%dur8fd$d%|j%DSj&SnSdS)&a Read a NumPy array from a text file. This is a helper function for loadtxt. Parameters ---------- fname : file, str, or pathlib.Path The filename or the file to be read. delimiter : str, optional Field delimiter of the fields in line of the file. Default is a comma, ','. If None any sequence of whitespace is considered a delimiter. comment : str or sequence of str or None, optional Character that begins a comment. All text from the comment character to the end of the line is ignored. Multiple comments or multiple-character comment strings are supported, but may be slower and `quote` must be empty if used. Use None to disable all use of comments. quote : str or None, optional Character that is used to quote string fields. Default is '"' (a double quote). Use None to disable quote support. imaginary_unit : str, optional Character that represent the imaginary unit `sqrt(-1)`. Default is 'j'. usecols : array_like, optional A one-dimensional array of integer column numbers. These are the columns from the file to be included in the array. If this value is not given, all the columns are used. skiplines : int, optional Number of lines to skip before interpreting the data in the file. max_rows : int, optional Maximum number of rows of data to read. Default is to read the entire file. converters : dict or callable, optional A function to parse all columns strings into the desired value, or a dictionary mapping column number to a parser function. E.g. if column 0 is a date string: ``converters = {0: datestr2num}``. Converters can also be used to provide a default value for missing data, e.g. ``converters = lambda s: float(s.strip() or 0)`` will convert empty fields to 0. Default: None ndmin : int, optional Minimum dimension of the array returned. Allowed values are 0, 1 or 2. Default is 0. unpack : bool, optional If True, the returned array is transposed, so that arguments may be unpacked using ``x, y, z = read(...)``. When used with a structured data-type, arrays are returned for each field. Default is False. dtype : numpy data type A NumPy dtype instance, can be a structured dtype to map to the columns of the file. encoding : str, optional Encoding used to decode the inputfile. The special value 'bytes' (the default) enables backwards-compatible behavior for `converters`, ensuring that inputs to the converter functions are encoded bytes objects. The special value 'bytes' has no additional effect if ``converters=None``. If encoding is ``'bytes'`` or ``None``, the default system encoding is used. Returns ------- ndarray NumPy array. FrmNTza dtype must be provided.ZSUMZS0ZU0ZM8Zm8zJcomments cannot be an empty string. Use comments=None to disable comments.rrzComment characters 'z ' cannot include the delimiter ''zwhen multiple comments or a multi-character comment is given, quotes are not supported. In this case quotechar must be set to None.zlen(imaginary_unit) must be 1.rtrrrzGfname must be a string, filehandle, list of strings, or generator. Got instead.) rrrrrrrrrrfilelikebyte_convertersS) rrrrrrrrrrrrc_byte_converters)Zaxisrz#loadtxt: input contained no data: "r)category stacklevelcsg|] }|qSr-r-.0fieldrr-r. Iz_read..)'rrrkindr0r6rrtuplerfrrrsrrrrAPathLikerBlib _datasourcerirtclosingrdtyperr _loadtxt_chunksizerrUZastypeZ concatenatershapewarningswarn UserWarningnamesr)rrrrrrrrrrrrrrZread_dtype_via_object_chunksrZfh_closing_ctxrfhdatarrchunks chunk_sizeZnext_arrskiprowsdtr-rr._readNsE       "                         .   r) quotecharlikec Cs| dur(t| |||||||||| | d St|tr<|d|durJtj}|} | durxt| ttfrj| g} dd| D} t|tr|d}t||| ||||||| | | d }|S)a' Load data from a text file. Parameters ---------- fname : file, str, pathlib.Path, list of str, generator File, filename, list, or generator to read. If the filename extension is ``.gz`` or ``.bz2``, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines. dtype : data-type, optional Data-type of the resulting array; default: float. If this is a structured data-type, the resulting array will be 1-dimensional, and each row will be interpreted as an element of the array. In this case, the number of columns used must match the number of fields in the data-type. comments : str or sequence of str or None, optional The characters or list of characters used to indicate the start of a comment. None implies no comments. For backwards compatibility, byte strings will be decoded as 'latin1'. The default is '#'. delimiter : str, optional The character used to separate the values. For backwards compatibility, byte strings will be decoded as 'latin1'. The default is whitespace. .. versionchanged:: 1.23.0 Only single character delimiters are supported. Newline characters cannot be used as the delimiter. converters : dict or callable, optional Converter functions to customize value parsing. If `converters` is callable, the function is applied to all columns, else it must be a dict that maps column number to a parser function. See examples for further details. Default: None. .. versionchanged:: 1.23.0 The ability to pass a single callable to be applied to all columns was added. skiprows : int, optional Skip the first `skiprows` lines, including comments; default: 0. usecols : int or sequence, optional Which columns to read, with 0 being the first. For example, ``usecols = (1,4,5)`` will extract the 2nd, 5th and 6th columns. The default, None, results in all columns being read. .. versionchanged:: 1.11.0 When a single column has to be read it is possible to use an integer instead of a tuple. E.g ``usecols = 3`` reads the fourth column the same way as ``usecols = (3,)`` would. unpack : bool, optional If True, the returned array is transposed, so that arguments may be unpacked using ``x, y, z = loadtxt(...)``. When used with a structured data-type, arrays are returned for each field. Default is False. ndmin : int, optional The returned array will have at least `ndmin` dimensions. Otherwise mono-dimensional axes will be squeezed. Legal values: 0 (default), 1 or 2. .. versionadded:: 1.6.0 encoding : str, optional Encoding used to decode the inputfile. Does not apply to input streams. The special value 'bytes' enables backward compatibility workarounds that ensures you receive byte arrays as results if possible and passes 'latin1' encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is 'bytes'. .. versionadded:: 1.14.0 .. versionchanged:: 2.0 Before NumPy 2, the default was ``'bytes'`` for Python 2 compatibility. The default is now ``None``. max_rows : int, optional Read `max_rows` rows of content after `skiprows` lines. The default is to read all the rows. Note that empty rows containing no data such as empty lines and comment lines are not counted towards `max_rows`, while such lines are counted in `skiprows`. .. versionadded:: 1.16.0 .. versionchanged:: 1.23.0 Lines containing no data, including comment lines (e.g., lines starting with '#' or as specified via `comments`) are not counted towards `max_rows`. quotechar : unicode character or None, optional The character used to denote the start and end of a quoted item. Occurrences of the delimiter or comment characters are ignored within a quoted item. The default value is ``quotechar=None``, which means quoting support is disabled. If two consecutive instances of `quotechar` are found within a quoted field, the first is treated as an escape character. See examples. .. versionadded:: 1.23.0 ${ARRAY_FUNCTION_LIKE} .. versionadded:: 1.20.0 Returns ------- out : ndarray Data read from the text file. See Also -------- load, fromstring, fromregex genfromtxt : Load data with missing values handled as specified. scipy.io.loadmat : reads MATLAB data files Notes ----- This function aims to be a fast reader for simply formatted files. The `genfromtxt` function provides more sophisticated handling of, e.g., lines with missing values. Each row in the input text file must have the same number of values to be able to read all values. If all rows do not have same number of values, a subset of up to n columns (where n is the least number of values present in all rows) can be read by specifying the columns via `usecols`. .. versionadded:: 1.10.0 The strings produced by the Python float.hex method can be used as input for floats. Examples -------- >>> from io import StringIO # StringIO behaves like a file object >>> c = StringIO("0 1\n2 3") >>> np.loadtxt(c) array([[0., 1.], [2., 3.]]) >>> d = StringIO("M 21 72\nF 35 58") >>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'), ... 'formats': ('S1', 'i4', 'f4')}) array([(b'M', 21, 72.), (b'F', 35, 58.)], dtype=[('gender', 'S1'), ('age', '>> c = StringIO("1,0,2\n3,0,4") >>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True) >>> x array([1., 3.]) >>> y array([2., 4.]) The `converters` argument is used to specify functions to preprocess the text prior to parsing. `converters` can be a dictionary that maps preprocessing functions to each column: >>> s = StringIO("1.618, 2.296\n3.141, 4.669\n") >>> conv = { ... 0: lambda x: np.floor(float(x)), # conversion fn for column 0 ... 1: lambda x: np.ceil(float(x)), # conversion fn for column 1 ... } >>> np.loadtxt(s, delimiter=",", converters=conv) array([[1., 3.], [3., 5.]]) `converters` can be a callable instead of a dictionary, in which case it is applied to all columns: >>> s = StringIO("0xDE 0xAD\n0xC0 0xDE") >>> import functools >>> conv = functools.partial(int, base=16) >>> np.loadtxt(s, converters=conv) array([[222., 173.], [192., 222.]]) This example shows how `converters` can be used to convert a field with a trailing minus sign into a negative number. >>> s = StringIO("10.01 31.25-\n19.22 64.31\n17.57- 63.94") >>> def conv(fld): ... return -float(fld[:-1]) if fld.endswith("-") else float(fld) ... >>> np.loadtxt(s, converters=conv) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) Using a callable as the converter can be particularly useful for handling values with different formatting, e.g. floats with underscores: >>> s = StringIO("1 2.7 100_000") >>> np.loadtxt(s, converters=float) array([1.e+00, 2.7e+00, 1.e+05]) This idea can be extended to automatically handle values specified in many different formats, such as hex values: >>> def conv(val): ... try: ... return float(val) ... except ValueError: ... return float.fromhex(val) >>> s = StringIO("1, 2.5, 3_000, 0b4, 0x1.4000000000000p+2") >>> np.loadtxt(s, delimiter=",", converters=conv) array([1.0e+00, 2.5e+00, 3.0e+03, 1.8e+02, 5.0e+00]) Or a format where the ``-`` sign comes after the number: >>> s = StringIO("10.01 31.25-\n19.22 64.31\n17.57- 63.94") >>> conv = lambda x: -float(x[:-1]) if x.endswith("-") else float(x) >>> np.loadtxt(s, converters=conv) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) Support for quoted fields is enabled with the `quotechar` parameter. Comment and delimiter characters are ignored when they appear within a quoted item delineated by `quotechar`: >>> s = StringIO('"alpha, #42", 10.0\n"beta, #64", 2.0\n') >>> dtype = np.dtype([("label", "U12"), ("value", float)]) >>> np.loadtxt(s, dtype=dtype, delimiter=",", quotechar='"') array([('alpha, #42', 10.), ('beta, #64', 2.)], dtype=[('label', '>> s = StringIO('"alpha, #42" 10.0\n"beta, #64" 2.0\n') >>> dtype = np.dtype([("label", "U12"), ("value", float)]) >>> np.loadtxt(s, dtype=dtype, delimiter=None, quotechar='"') array([('alpha, #42', 10.), ('beta, #64', 2.)], dtype=[('label', '>> s = StringIO('"Hello, my name is ""Monty""!"') >>> np.loadtxt(s, dtype="U", delimiter=",", quotechar='"') array('Hello, my name is "Monty"!', dtype='>> d = StringIO("1 2\n2 4\n3 9 12\n4 16 20") >>> np.loadtxt(d, usecols=(0, 1)) array([[ 1., 2.], [ 2., 4.], [ 3., 9.], [ 4., 16.]]) N) rrrrrrrrrrrcSs$g|]}t|tr|dn|qS)r)rrrmr)rr[r-r-r.r`szloadtxt..) rrrrrrrrrrr)_loadtxt_with_likerrrmrrfloat64rsr)rrrrrrrrrrrrrrrr-r-r.rPs6~      rc Cs|fSr'r-) rXfmtrnewlineheaderfooterrrr-r-r._savetxt_dispatcherpsr%.18e  r# c  Cs Gddd} d} t|tjr(t|}t|rXt|dtjj j|d|d} d} n"t |drr| ||pld} nt d zt |}|j d ks|j d krt d |j n@|j d kr|jjdurt|j}d } qt|jj} n |jd } t|} t|ttfvr2t|| kr&tdt|||}nt|tr|d}t d|}|d kr| rxd||fg| }n |g| }||}n4| r|d | kr|n| s|| kr|n|}nt d|ft|d kr|dd|}| |||| r`|D]P}g}|D]}||j||j q|t||}| |ddq nj|D]d}z|t||}Wn>t!y}z$t!dt|j|f|WYd}~n d}~00| |qdt|d kr|dd|}| |||W| r| n| r| 0dS)a Save an array to a text file. Parameters ---------- fname : filename, file handle or pathlib.Path If the filename ends in ``.gz``, the file is automatically saved in compressed gzip format. `loadtxt` understands gzipped files transparently. X : 1D or 2D array_like Data to be saved to a text file. fmt : str or sequence of strs, optional A single format (%10.5f), a sequence of formats, or a multi-format string, e.g. 'Iteration %d -- %10.5f', in which case `delimiter` is ignored. For complex `X`, the legal options for `fmt` are: * a single specifier, ``fmt='%.4e'``, resulting in numbers formatted like ``' (%s+%sj)' % (fmt, fmt)`` * a full string specifying every real and imaginary part, e.g. ``' %.4e %+.4ej %.4e %+.4ej %.4e %+.4ej'`` for 3 columns * a list of specifiers, one per column - in this case, the real and imaginary part must have separate specifiers, e.g. ``['%.3e + %.3ej', '(%.15e%+.15ej)']`` for 2 columns delimiter : str, optional String or character separating columns. newline : str, optional String or character separating lines. .. versionadded:: 1.5.0 header : str, optional String that will be written at the beginning of the file. .. versionadded:: 1.7.0 footer : str, optional String that will be written at the end of the file. .. versionadded:: 1.7.0 comments : str, optional String that will be prepended to the ``header`` and ``footer`` strings, to mark them as comments. Default: '# ', as expected by e.g. ``numpy.loadtxt``. .. versionadded:: 1.7.0 encoding : {None, str}, optional Encoding used to encode the outputfile. Does not apply to output streams. If the encoding is something other than 'bytes' or 'latin1' you will not be able to load the file in NumPy versions < 1.14. Default is 'latin1'. .. versionadded:: 1.14.0 See Also -------- save : Save an array to a binary file in NumPy ``.npy`` format savez : Save several arrays into an uncompressed ``.npz`` archive savez_compressed : Save several arrays into a compressed ``.npz`` archive Notes ----- Further explanation of the `fmt` parameter (``%[flag]width[.precision]specifier``): flags: ``-`` : left justify ``+`` : Forces to precede result with + or -. ``0`` : Left pad the number with zeros instead of space (see width). width: Minimum number of characters to be printed. The value is not truncated if it has more characters. precision: - For integer specifiers (eg. ``d,i,o,x``), the minimum number of digits. - For ``e, E`` and ``f`` specifiers, the number of digits to print after the decimal point. - For ``g`` and ``G``, the maximum number of significant digits. - For ``s``, the maximum number of characters. specifiers: ``c`` : character ``d`` or ``i`` : signed decimal integer ``e`` or ``E`` : scientific notation with ``e`` or ``E``. ``f`` : decimal floating point ``g,G`` : use the shorter of ``e,E`` or ``f`` ``o`` : signed octal ``s`` : string of characters ``u`` : unsigned decimal integer ``x,X`` : unsigned hexadecimal integer This explanation of ``fmt`` is not complete, for an exhaustive specification see [1]_. References ---------- .. [1] `Format Specification Mini-Language `_, Python Documentation. Examples -------- >>> x = y = z = np.arange(0.0,5.0,1.0) >>> np.savetxt('test.out', x, delimiter=',') # X is an array >>> np.savetxt('test.out', (x,y,z)) # x,y,z equal sized 1D arrays >>> np.savetxt('test.out', x, fmt='%1.4e') # use exponential notation c@s@eZdZdZddZddZddZdd Zd d Zd d Z dS)zsavetxt..WriteWrapz0Convert to bytes on bytestream inputs. cSs||_||_|j|_dSr')rr first_writedo_write)r+rrr-r-r.r/sz#savetxt..WriteWrap.__init__cSs|jdSr')rr^r8r-r-r.r^sz savetxt..WriteWrap.closecSs||dSr')rr+vr-r-r.rsz savetxt..WriteWrap.writecSs0t|tr|j|n|j||jdSr')rrrmrrencoderr r-r-r. write_bytess z&savetxt..WriteWrap.write_bytescSs|jt|dSr')rrrr r-r-r. write_normalsz'savetxt..WriteWrap.write_normalcSs@z|||j|_Wn$ty:|||j|_Yn0dSr')r rrr r r-r-r.r s     z&savetxt..WriteWrap.first_writeN) r:r;r<r=r/r^rr r rr-r-r-r. WriteWrapsrFwtrTrrz%fname must be a string or file handlerrz.Expected 1D or 2D array, got %dD array insteadrNzfmt has wrong shape. %s%z'fmt has wrong number of %% formats: %sz (%s+%sj)zinvalid fmt: %rrz+--z?Mismatch between array dtype ('%s') and format specifier ('%s'))"rrrArrBrrir^rrrr@rZasarrayrrrrrrfrZ iscomplexobjrr6rr3rsrucountreplacerrUrealimagr)rrrrrrrrrrown_fhrZncolZ iscomplex_XrZ n_fmt_charserrorrowZrow2numbersr rr-r-r.rvs{!                   rc Csd}t|ds0t|}tjjj|d|d}d}zt|tjsHt|}|j durZt d| }t|t r~t|t r~t|}t|dst|}||}|rt|d tst||j d }tj||d }||_ntj||d }|W|r|Sn|r|0dS) a Construct an array from a text file, using regular expression parsing. The returned array is always a structured array, and is constructed from all matches of the regular expression in the file. Groups in the regular expression are converted to fields of the structured array. Parameters ---------- file : file, str, or pathlib.Path Filename or file object to read. .. versionchanged:: 1.22.0 Now accepts `os.PathLike` implementations. regexp : str or regexp Regular expression used to parse the file. Groups in the regular expression correspond to fields in the dtype. dtype : dtype or list of dtypes Dtype for the structured array; must be a structured datatype. encoding : str, optional Encoding used to decode the inputfile. Does not apply to input streams. .. versionadded:: 1.14.0 Returns ------- output : ndarray The output array, containing the part of the content of `file` that was matched by `regexp`. `output` is always a structured array. Raises ------ TypeError When `dtype` is not a valid dtype for a structured array. See Also -------- fromstring, loadtxt Notes ----- Dtypes for structured arrays can be specified in several forms, but all forms specify at least the data type and field name. For details see `basics.rec`. Examples -------- >>> from io import StringIO >>> text = StringIO("1312 foo\n1534 bar\n444 qux") >>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything] >>> output = np.fromregex(text, regexp, ... [('num', np.int64), ('key', 'S3')]) >>> output array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')], dtype=[('num', '>> output['num'] array([1312, 1534, 444]) Fr>rrTNz$dtype must be a structured datatype.matchrr)r@rArBrrrrirrrrrr>rmrsrrecompilefindallrarrayr^) rEregexprrrcontentseqZnewdtypeoutputr-r-r.r#is:>        r#_zf%i)rrcSsh|durBt|||| ||||| | | | |||||||||dSt||durn|r^td|dkrntd|rddlm}m}|pi}t|tstdt ||d krd}d }nd }t|t j rt |}t|t rtjjj|d |d }t|}n|}t|}z t|}Wn<tyP} z"tdt |d| WYd} ~ n d} ~ 00|t||||d}!t| | || d}"zvt D]t|qd}#|#stt||}$d ur|dur||$vrd|$|dd}$|!|$}#qWn.tyd}$g}#tjd|ddYn0d urP|#d }%|durP|%|vrP|#d=| durzdd| dD} Wn<t!yz t"| } Wnty| g} Yn0Yn0t#| p|#}&d ur|"dd|#Dd}$n2t$r|"dddDnr|"dur2t%| | || ddurDt"| rt&| D]>\}'t$|'rt'|'| <n|'dkrR|'t#|#| <qRdurt#|&krԈj(t)fdd| Dt"j*n*durt#|&krfdd| Dndurdurt"j*|p&d}(t|(t+r>|(,d}(d dt|&D}t|(tr*|(-D]\})}*t$|)rz'|)})WntyYqdYn0| rz| '|)})WntyYn0t|*t"t.frd!d|*D}*n t |*g}*|)dur|D]}+|+/|*qn||)/|*qdnt|(t"t.frnt0|(|D]&\},}-t |,},|,|-vrD|-1|,qDnJt|(t r|(d}.|D]}-|-/|.qn|D]}-|-/t |(gq|}/|/durg}/dg|&}t|/tr\|/-D]p\})}*t$|)r$z'|)})Wnty"YqYn0| rNz| '|)})WntyLYn0|*||)<qnHt|/t"t.frt#|/}0|0|&kr|/|d|0<n |/d|&}n |/g|&}durd"dt0||D}nRt2d d#}1t#|1dkrt0|1||}2d$d|2D}nt0||}2fd%d|2D}g}3|-D]\}4t$|4rdz'|4}4|4Wnty`Yq"Yn0n6| rz| '|4WntyYq"Yn0n|4t#|$r|#|4}5nd}5t+urt3}6n"|rd&d'}7t4j5|7d(}6n}6|j6|6d |5||d)|31|6fq"|6|3g j1}8|r:g}9|9j1}:g};|;j1}|>dkr|qV| rz fd*d| D Wn.t9y|< d|>fYqVYn0n"|>|&kr|< d|>fqV|8t. | r|:t.d+dt0 |Dt# |krV q2qVWdn1 sH0Ydu rt&|D]\}?fd,d D}@z|?:|@Wnt; yd-}At|,Wn<t?tf y |Ad.7}A|A|4d |,f;}At?|AYn0 qYn0 qdt#|;}B|Bdk rt# |B|d/|& |dk rt# fd0d|;D}C|;d|B|C};||C8} fd1d|;D}At#|A r|A@dd2d3|A}A| rt|Antj|AtAdd|dk r d|  | r|9d| }9| r$t"t0 fd4dt&|D nt"t0 fd5dt&|D }Ddu rd6d|D}Ed7dt&|ED | rވ rtjd8tjBjCdd fd9d:zfd;d|DD}DWntD yYn0 D]tjE|E< q|Edd}Ft&|ED]<\}GtF|GtjG rtHfdd?t0||ED}It#|Idk rn|I\}J|JtI}K}Ln2fd@dt&|FD}K| rȇfdAdt&|FD}Ln&t"t0|F}Kt"t0tIgt#|F}LtjJ|D|KdB|rtjJ|9|LdB}Mn rj*du r_*t#|1dk rdCdDd=|1Dv rLtK r`_. Examples -------- >>> from io import StringIO >>> import numpy as np Comma delimited file with mixed dtype >>> s = StringIO("1,1.3,abcde") >>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), ... ('mystring','S5')], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '>> _ = s.seek(0) # needed for StringIO example only >>> data = np.genfromtxt(s, dtype=None, ... names = ['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, 'abcde'), dtype=[('myint', '>> _ = s.seek(0) >>> data = np.genfromtxt(s, dtype="i8,f8,S5", ... names=['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '>> s = StringIO("11.3abcde") >>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'], ... delimiter=[1,3,5]) >>> data array((1, 1.3, 'abcde'), dtype=[('intvar', '>> f = StringIO(''' ... text,# of chars ... hello world,11 ... numpy,5''') >>> np.genfromtxt(f, dtype='S12,S12', delimiter=',') array([(b'text', b''), (b'hello world', b'11'), (b'numpy', b'5')], dtype=[('f0', 'S12'), ('f1', 'S12')]) N)rrr skip_header skip_footerrmissing_valuesfilling_valuesrr excludelist deletechars replace_space autostripcase_sensitive defaultfmtrusemaskloose invalid_raiserrrzPThe keywords 'skip_footer' and 'max_rows' can not be specified at the same time.rz'max_rows' must be at least 1.r) MaskedArraymake_mask_descrzNThe input argument 'converter' should be a valid dictionary (got '%s' instead)rmTFrrz\fname must be a string, a filehandle, a sequence of strings, or an iterator of strings. Got r)rrr-r)r*r+r.r,rz"genfromtxt: Empty input file: "%s"rrcSsg|] }|qSr-striprr%r-r-r.rrzgenfromtxt..rcSsg|]}t|qSr-)rsr7r8r-r-r.rrcSsg|] }|qSr-r6r8r-r-r.rr)r/rr*r+r.r,csg|] }|qSr-r-r8)descrr-r.rrcsg|] }|qSr-r-r8)rr-r.r#rr-rcSsg|]}tdgqSr)r6r8r-r-r.r.rcSsg|] }t|qSr-)rsr8r-r-r.rDrcSsg|]\}}td||dqS)N)r(rzrrmissfillr-r-r.rs)Z flatten_basecSs"g|]\}}}t|d||dqST)lockedr(rzr;)rrr=r>r-r-r.rs cs g|]\}}td||dqSr?r;r<rr-r.rs cSs"t|tur||S||dSNr)rrmr )r[convr-r-r. tobytes_firsts z!genfromtxt..tobytes_firstrB)r@ testing_valuerzr(csg|] }|qSr-r-r8rr-r.rrcSsg|]\}}||vqSr-r6)rr mr-r-r.rscsg|]}t|qSr-)r)r_mrr-r.rrz0Converter #%i is locked and cannot be upgraded: z"(occurred line #%i for value '%s')z- Line #%%i (got %%i columns instead of %i)cs g|]}|dkr|qS)rr-r8)nbrowsr&r-r.r scsg|]\}}||fqSr-r-)rrnb)templater-r.r szSome errors were detected !rcs,g|]$\}fddtt|DqS)csg|]}|qSr-)Z _loose_callr_rrDr-r.r+ r)genfromtxt...maprrrrowsrDr.r+ scs,g|]$\}fddtt|DqS)csg|]}|qSr-)Z _strict_callrLrDr-r.r/ rrNrOrQrRrDr.r/ scSsg|] }|jqSr-rrrBr-r-r.r6 rcSsg|]\}}|tjkr|qSr-)rZstr_)rrr r-r-r.r8 s  zReading unicode strings without specifying the encoding argument is deprecated. Set the encoding, use None for the system default.cs,t|}D]}||d||<q t|SrA)r6r r)Zrow_tuprr) strcolidxr-r.encode_unicode_colsC sz'genfromtxt..encode_unicode_colscsg|] }|qSr-r-)rr)rWr-r.rJ rc3s|]}t|VqdSr'rfrrrHr-r. U rzgenfromtxt..cSsh|]\}}|jr|qSr-)Z_checked)rrZc_typer-r-r. Z szgenfromtxt..csg|]\}}||fqSr-r-rrrr/r-r.rb scsg|]\}}|tfqSr-boolr]r^r-r.re srOcss|] }|jVqdSr')charr8r-r-r.r[w rz4Nested fields involving objects are not supported...cSsg|] }d|fqSr:r-r8r-r-r.r~ rcSsg|] }dtfqSr:r_)rtr-r-r.r rcSsg|] }|jqSr-rTrUr-r-r.r rc3s|]}t|VqdSr'rYrZrHr-r.r[ rcSsg|] }|tfqSr-r_r8r-r-r.r rcsg|]}|dkr|qSr:r-r8rDr-r.r srcsg|] }|qSr-r-r)r$r-r.r r)R_genfromtxt_with_likerrZnumpy.mar3r4rrrrrrArrBrsrrrrirrrrdrrrangenextrrur StopIterationrrr7r3r6rfrrrrr9rrrmrr{rextendrVrUrr functoolspartialupdate itertoolschain IndexErrorZ iterupgraderrPrupgraderinsertr exceptionsZVisibleDeprecationWarningUnicodeEncodeErrorbytes_Z issubdtype charactermaxr`r rNotImplementedErrorviewr(Z_maskrr)Srrrrr&r'rr(r)rrr*r+r,r-r.r/rr0r1r2rrrrr3r4Zuser_convertersrrXZfid_ctxZfhdrZ split_lineZvalidate_namesZ first_values first_lineZfvalZnbcolscurrentZuser_missing_valuesr5rr=rentryZ user_valueZuser_filling_valuesnZ dtype_flatZzipitZ uc_updaterrEZ user_convrCZappend_to_rowsZmasksZappend_to_masksinvalidZappend_to_invalidrZnbvalues converterZcurrent_columnerrmsgZ nbinvalidZnbinvalid_skippedrZ column_typesZsized_column_typesZcol_typeZn_charsbaseZ uniform_typeZddtypeZmdtypeZ outputmaskZrowmasksZ ishomogeneousttyperqZmvalr-)rBr/r9rrWrrrIr$rSr&rVrKr|r.rsL                                                              *                                 rcKsdtjdtdd|dd|dd}t|fi|}|rTdd lm}||}n |t j }|S) a> Load ASCII data from a file and return it in a record array. If ``usemask=False`` a standard `recarray` is returned, if ``usemask=True`` a MaskedRecords array is returned. .. deprecated:: 2.0 Use `numpy.genfromtxt` instead. Parameters ---------- fname, kwargs : For a description of input parameters, see `genfromtxt`. See Also -------- numpy.genfromtxt : generic function Notes ----- By default, `dtype` is None, which means that the data-type of the output array will be determined from the data. zT`recfromtxt` is deprecated, use `numpy.genfromtxt` instead.(deprecated in NumPy 2.0)rr5rNr0Fr MaskedRecords) rrDeprecationWarning setdefaultryrnumpy.ma.mrecordsrrwrrecarray)rrGr0r$rr-r-r. recfromtxt s     rcKstjdtdd|dd|dd|dd |d d t|fi|}|d d }|rxddlm}||}n |t j }|S)a Load ASCII data stored in a comma-separated file. The returned array is a record array (if ``usemask=False``, see `recarray`) or a masked record array (if ``usemask=True``, see `ma.mrecords.MaskedRecords`). .. deprecated:: 2.0 Use `numpy.genfromtxt` with comma as `delimiter` instead. Parameters ---------- fname, kwargs : For a description of input parameters, see `genfromtxt`. See Also -------- numpy.genfromtxt : generic function to load ASCII data. Notes ----- By default, `dtype` is None, which means that the data-type of the output array will be determined from the data. zo`recfromcsv` is deprecated, use `numpy.genfromtxt` with comma as `delimiter` instead. (deprecated in NumPy 2.0)rr5r.lowerrTrrrNr0Frr) rrrrrryrrrwrr)rrGr$r0rr-r-r. recfromcsv s         r)NFTr~)NN)TT)TN)r)NNNNNNN)rrrrrrN)N)Sr=rArrirlrr(rrrrZopindexrcollections.abcrrr$rrrrrZ numpy._corer Znumpy._core.multiarrayr r Znumpy._core._multiarray_umathr Znumpy._core.overridesr rZ_iotoolsrrrrrrrrrrrZ numpy._utilsrr__all__rjZarray_function_dispatchr&rHrIr}rrr rr!rr"rrintrrrrrrfloatrrrrr#rusortedZdefaultdeletecharsrrdrrr-r-r-r.s     41M=  J V A %      s ev -