"""
.. currentmodule:: jax.experimental.sparse

The :mod:`jax.experimental.sparse` module includes experimental support for sparse matrix
operations in JAX. It is under active development, and the API is subject to change. The
primary interfaces made available are the :class:`BCOO` sparse array type, and the
:func:`sparsify` transform.

Batched-coordinate (BCOO) sparse matrices
-----------------------------------------
The main high-level sparse object currently available in JAX is the :class:`BCOO`,
or *batched coordinate* sparse array, which offers a compressed storage format compatible
with JAX transformations, in particular JIT (e.g. :func:`jax.jit`), batching
(e.g. :func:`jax.vmap`) and autodiff (e.g. :func:`jax.grad`).

Here is an example of creating a sparse array from a dense array:

    >>> from jax.experimental import sparse
    >>> import jax.numpy as jnp
    >>> import numpy as np

    >>> M = jnp.array([[0., 1., 0., 2.],
    ...                [3., 0., 0., 0.],
    ...                [0., 0., 4., 0.]])

    >>> M_sp = sparse.BCOO.fromdense(M)

    >>> M_sp
    BCOO(float32[3, 4], nse=4)

Convert back to a dense array with the ``todense()`` method:

    >>> M_sp.todense()
    Array([[0., 1., 0., 2.],
           [3., 0., 0., 0.],
           [0., 0., 4., 0.]], dtype=float32)

The BCOO format is a modified version of the standard COO format, extended to support
batch and dense dimensions; the underlying storage can be inspected via the ``data``
and ``indices`` attributes:

    >>> M_sp.data  # Explicitly stored data
    Array([1., 2., 3., 4.], dtype=float32)

    >>> M_sp.indices  # Indices of the stored data
    Array([[0, 1],
           [0, 3],
           [1, 0],
           [2, 2]], dtype=int32)
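
The stored layout is enough to reconstruct the dense array by hand. As a minimal
sketch (:meth:`BCOO.todense` is the supported API; this is only for illustration),
scattering ``data`` into an array of zeros at ``indices`` recovers ``M``:

    >>> rows, cols = M_sp.indices.T
    >>> dense = jnp.zeros(M_sp.shape).at[rows, cols].set(M_sp.data)
    >>> bool((dense == M).all())
    True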

BCOO objects have familiar array-like attributes, as well as sparse-specific attributes:

    >>> M_sp.ndim
    2

    >>> M_sp.shape
    (3, 4)

    >>> M_sp.dtype
    dtype('float32')

    >>> M_sp.nse  # "number of specified elements"
    4
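
The number of specified elements is fixed when the array is created; if needed,
:meth:`BCOO.fromdense` accepts an ``nse`` argument to reserve storage for elements
that may become nonzero later (the output below assumes the repr format shown above):

    >>> sparse.BCOO.fromdense(M, nse=8)
    BCOO(float32[3, 4], nse=8)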

BCOO objects also implement a number of array-like methods, allowing them to be used
directly within JAX programs. For example, here we compute the transposed matrix-vector
product:

    >>> y = jnp.array([3., 6., 5.])

    >>> M_sp.T @ y
    Array([18.,  3., 20.,  6.], dtype=float32)

    >>> M.T @ y  # Compare to dense version
    Array([18.,  3., 20.,  6.], dtype=float32)

BCOO objects are designed to be compatible with JAX transforms, including :func:`jax.jit`,
:func:`jax.vmap`, :func:`jax.grad`, and others. For example:

    >>> from jax import grad, jit

    >>> def f(y):
    ...   return (M_sp.T @ y).sum()
    ...
    >>> jit(grad(f))(y)
    Array([3., 3., 4.], dtype=float32)
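
Batching works similarly. As a small sketch (the batch here is just two scaled
copies of ``y``), :func:`jax.vmap` maps a sparse matrix-vector product over a
batch of dense vectors:

    >>> from jax import vmap
    >>> vmap(lambda v: M_sp.T @ v)(jnp.stack([y, 2 * y]))
    Array([[18.,  3., 20.,  6.],
           [36.,  6., 40., 12.]], dtype=float32)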

Note, however, that under normal circumstances :mod:`jax.numpy` and :mod:`jax.lax` functions
do not know how to handle sparse matrices, so attempting to compute things like
``jnp.dot(M_sp.T, y)`` will result in an error (however, see the next section).
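
For example, a direct call like the one below fails; the exact message depends on
the JAX version, so the output is skipped here:

    >>> jnp.dot(M_sp.T, y)  # doctest: +SKIP
    Traceback (most recent call last):
      ...
    TypeError: ...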

Sparsify transform
------------------
An overarching goal of the JAX sparse implementation is to provide a means to switch from
dense to sparse computation seamlessly, without having to modify the dense implementation.
The experimental sparse package accomplishes this through the :func:`sparsify` transform.

Consider this function, which computes a more complicated result from a matrix and a vector input:

    >>> def f(M, v):
    ...   return 2 * jnp.dot(jnp.log1p(M.T), v) + 1
    ...
    >>> f(M, y)
    Array([17.635532,  5.158883, 17.09438 ,  7.591674], dtype=float32)

Passing a sparse matrix to this function directly would result in an error, because ``jnp``
functions do not recognize sparse inputs. However, with :func:`sparsify`, we get a version of
the function that does accept sparse matrices:

    >>> f_sp = sparse.sparsify(f)

    >>> f_sp(M_sp, y)
    Array([17.635532,  5.158883, 17.09438 ,  7.591674], dtype=float32)
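
A sparsified function also continues to accept dense inputs, so a single
implementation can serve both cases; here we simply reuse the dense ``M`` from above:

    >>> f_sp(M, y)
    Array([17.635532,  5.158883, 17.09438 ,  7.591674], dtype=float32)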

Support for :func:`sparsify` includes a large number of the most common primitives, including
the following (a short example appears after the list):

- generalized (batched) matrix products & einstein summations (:obj:`~jax.lax.dot_general_p`)
- zero-preserving elementwise binary operations (e.g. :obj:`~jax.lax.add_p`, :obj:`~jax.lax.mul_p`)
- zero-preserving elementwise unary operations (e.g. :obj:`~jax.lax.abs_p`, :obj:`~jax.lax.neg_p`)
- summation reductions (:obj:`~jax.lax.reduce_sum_p`)
- general indexing operations (:obj:`~jax.lax.slice_p`, :obj:`~jax.lax.dynamic_slice_p`, :obj:`~jax.lax.gather_p`)
- concatenation and stacking (:obj:`~jax.lax.concatenate_p`)
- transposition & reshaping (:obj:`~jax.lax.transpose_p`, :obj:`~jax.lax.reshape_p`,
  :obj:`~jax.lax.squeeze_p`, :obj:`~jax.lax.broadcast_in_dim_p`)
- some higher-order functions (:obj:`~jax.lax.cond_p`, :obj:`~jax.lax.while_p`, :obj:`~jax.lax.scan_p`)
- some simple 1D convolutions (:obj:`~jax.lax.conv_general_dilated_p`)
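
As a quick check that these primitives compose, here is a small sketch combining a
zero-preserving unary operation with a summation reduction, applied to the ``M_sp``
matrix defined earlier:

    >>> sparse.sparsify(lambda x: jnp.abs(x).sum())(M_sp)
    Array(10., dtype=float32)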

Nearly any :mod:`jax.numpy` function that lowers to these supported primitives can be used
within a sparsify transform to operate on sparse arrays. This set of primitives is enough
to enable relatively sophisticated sparse workflows, as the next section will show.

Example: sparse logistic regression
-----------------------------------
As an example of a more complicated sparse workflow, let's consider a simple logistic regression
implemented in JAX. Notice that the following implementation has no reference to sparsity:

    >>> import functools
    >>> from sklearn.datasets import make_classification
    >>> from jax.scipy import optimize

    >>> def sigmoid(x):
    ...   return 0.5 * (jnp.tanh(x / 2) + 1)
    ...
    >>> def y_model(params, X):
    ...   return sigmoid(jnp.dot(X, params[1:]) + params[0])
    ...
    >>> def loss(params, X, y):
    ...   y_hat = y_model(params, X)
    ...   return -jnp.mean(y * jnp.log(y_hat) + (1 - y) * jnp.log(1 - y_hat))
    ...
    >>> def fit_logreg(X, y):
    ...   params = jnp.zeros(X.shape[1] + 1)
    ...   result = optimize.minimize(functools.partial(loss, X=X, y=y),
    ...                              x0=params, method='BFGS')
    ...   return result.x

    >>> X, y = make_classification(n_classes=2, random_state=1701)
    >>> params_dense = fit_logreg(X, y)
    >>> print(params_dense)  # doctest: +SKIP
    [-0.7298445   0.29893667  1.0248291  -0.44436368  0.8785025  -0.7724008
     -0.62893456  0.2934014   0.82974285  0.16838408 -0.39774987 -0.5071844
      0.2028872   0.5227761  -0.3739224  -0.7104083   2.4212713   0.6310087
     -0.67060554  0.03139788 -0.05359547]

This returns the best-fit parameters of a dense logistic regression problem.
To fit the same model on sparse data, we can apply the :func:`sparsify` transform:

    >>> Xsp = sparse.BCOO.fromdense(X)  # Sparse version of the input
    >>> fit_logreg_sp = sparse.sparsify(fit_logreg)  # Sparse-transformed fit function
    >>> params_sparse = fit_logreg_sp(Xsp, y)
    >>> print(params_sparse)  # doctest: +SKIP
    [-0.72971725  0.29878938  1.0246326  -0.44430563  0.8784217  -0.77225566
     -0.6288222   0.29335397  0.8293481   0.16820715 -0.39764675 -0.5069753
      0.202579    0.522672   -0.3740134  -0.7102678   2.4209507   0.6310593
     -0.670236    0.03132951 -0.05356663]
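
The dense and sparse fits agree to within optimizer tolerance. As a sanity check
(skipped under doctest, since BFGS results can vary slightly across platforms):

    >>> bool(jnp.allclose(params_dense, params_sparse, atol=1e-2))  # doctest: +SKIP
    True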
"""

from jax.experimental.sparse.ad import (
    jacfwd, jacobian, jacrev, grad, value_and_grad,
)
from jax.experimental.sparse.bcoo import (
    bcoo_broadcast_in_dim, bcoo_concatenate, bcoo_conv_general_dilated,
    bcoo_dot_general, bcoo_dot_general_p, bcoo_dot_general_sampled,
    bcoo_dot_general_sampled_p, bcoo_dynamic_slice, bcoo_extract,
    bcoo_extract_p, bcoo_fromdense, bcoo_fromdense_p, bcoo_gather,
    bcoo_multiply_dense, bcoo_multiply_sparse, bcoo_update_layout,
    bcoo_reduce_sum, bcoo_reshape, bcoo_rev, bcoo_slice, bcoo_sort_indices,
    bcoo_sort_indices_p, bcoo_spdot_general_p, bcoo_squeeze,
    bcoo_sum_duplicates, bcoo_sum_duplicates_p, bcoo_todense, bcoo_todense_p,
    bcoo_transpose, bcoo_transpose_p, BCOO,
)
from jax.experimental.sparse.bcsr import (
    bcsr_broadcast_in_dim, bcsr_concatenate, bcsr_dot_general,
    bcsr_dot_general_p, bcsr_extract, bcsr_extract_p, bcsr_fromdense,
    bcsr_fromdense_p, bcsr_sum_duplicates, bcsr_todense, bcsr_todense_p, BCSR,
)
from jax.experimental.sparse._base import JAXSparse
from jax.experimental.sparse.api import empty, eye, todense, todense_p
from jax.experimental.sparse.util import (
    CuSparseEfficiencyWarning, SparseEfficiencyError, SparseEfficiencyWarning,
)
from jax.experimental.sparse.coo import (
    coo_fromdense, coo_fromdense_p, coo_matmat, coo_matmat_p, coo_matvec,
    coo_matvec_p, coo_todense, coo_todense_p, COO,
)
from jax.experimental.sparse.csr import (
    csr_fromdense, csr_fromdense_p, csr_matmat, csr_matmat_p, csr_matvec,
    csr_matvec_p, csr_todense, csr_todense_p, CSC, CSR,
)
from jax.experimental.sparse.random import random_bcoo
from jax.experimental.sparse.transform import sparsify, SparseTracer
from jax.experimental.sparse import linalg