Automatic Sparsity Detection for Nonlinear Equations: What You Need to Know | HackerNoon
Briefly

The article discusses an approximate algorithm for Jacobian sparsity detection that reduces overhead for smaller systems with clear sparsity patterns. By computing a dense Jacobian at randomly generated inputs, the algorithm attempts to reveal the non-zero elements of the Jacobian efficiently. Though it scales poorly to larger systems due to issues such as floating point errors, it remains advantageous for moderately sized problems, where it enables the use of sparse linear solvers. This improves computational efficiency compared with solving large dense linear systems, in line with advances in numerical methods for nonlinear equations.
Symbolic sparsity detection has a high overhead for smaller systems with well-defined sparsity patterns. We provide an approximate algorithm to determine the Jacobian sparsity pattern in those setups.
As is evident, computing the sparsity pattern costs n times as much as computing the dense Jacobian, typically via forward-mode automatic differentiation.
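The idea described above can be sketched as follows: evaluate a dense Jacobian at a few random inputs and take the union of the nonzero patterns. This is an illustrative sketch only; the test function `f`, the finite-difference Jacobian, the sample count, and the tolerance are all assumptions, not the article's actual implementation (which uses forward-mode automatic differentiation).

```python
import numpy as np

def dense_jacobian(f, x, eps=1e-6):
    """Approximate the dense Jacobian of f at x by forward differences
    (a stand-in for forward-mode automatic differentiation)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def approximate_sparsity(f, n, num_samples=3, tol=1e-9):
    """Union the nonzero patterns of dense Jacobians taken at
    several randomly generated inputs."""
    pattern = np.zeros((n, n), dtype=bool)
    for _ in range(num_samples):
        J = dense_jacobian(f, np.random.rand(n))
        pattern |= np.abs(J) > tol
    return pattern

# Hypothetical tridiagonal-style system: each residual touches
# at most three unknowns, so the Jacobian is sparse.
def f(x):
    r = 2.0 * x
    r[1:] -= x[:-1]
    r[:-1] -= x[1:]
    return r

print(approximate_sparsity(f, 5).astype(int))
```

Because the detected pattern is the union over random samples, entries that happen to cancel at one input are usually caught at another; state-dependent branches, as the article notes, can still defeat this.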
Approximate sparsity detection scales poorly beyond a certain problem size... our method fails to accurately predict the sparsity pattern in the presence of state-dependent branches.
Regardless, we observe that approximate sparsity detection is extremely efficient for moderately sized problems, enabling sparse linear solvers that are significantly more efficient.
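To illustrate the payoff of a known sparsity pattern, the sketch below assembles a hypothetical tridiagonal Jacobian directly in sparse (CSR) form and solves a linear system with a sparse factorization. The matrix and right-hand side are invented for demonstration; the point is that only the nonzeros are stored and factored, rather than a dense n-by-n system.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Hypothetical 1000x1000 tridiagonal Jacobian, stored only at its nonzeros.
n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
rows = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
cols = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
vals = np.concatenate([main, off, off])
J = csr_matrix((vals, (rows, cols)), shape=(n, n))

b = np.ones(n)
x = spsolve(J, b)  # sparse direct solve; only ~3n nonzeros are factored
print(np.allclose(J @ x, b))
```

A dense solve of the same system would store and factor all n^2 entries; exploiting the sparsity pattern keeps both storage and factorization cost proportional to the number of nonzeros for banded structures like this one.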