Article

Portfolio Optimization: A Neurodynamic Approach Based on Spiking Neural Networks

1 School of Artificial Intelligence (AI), Taizhou University, Taizhou 318000, China
2 Smart City Research Institute, The Hong Kong Polytechnic University, Hong Kong
3 Department of Computing (COMP), The Hong Kong Polytechnic University, Hong Kong
4 Faculty of Information Technology and Electrical Engineering (ITEE), University of Oulu, 90100 Oulu, Finland
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(12), 808; https://doi.org/10.3390/biomimetics10120808
Submission received: 21 October 2025 / Revised: 15 November 2025 / Accepted: 28 November 2025 / Published: 2 December 2025

Abstract

Portfolio optimization is fundamental to modern finance, enabling investors to construct allocations that balance risk and return while satisfying practical constraints. When transaction costs and cardinality limits are incorporated, the problem becomes a computationally demanding mixed-integer quadratic program. This work demonstrates how principles from biomimetics—specifically, the computational strategies employed by biological neural systems—can inspire efficient algorithms for complex optimization problems. We show that this problem can be reformulated as a constrained quadratic program and solved using dynamics inspired by spiking neural networks. Building on recent theoretical work showing that leaky integrate-and-fire dynamics naturally implement projected gradient descent for convex optimization, we develop a solver that alternates between continuous gradient flow and discrete constraint projections. By mimicking the event-driven, energy-efficient computation observed in biological neurons, our approach offers a biomimetic pathway to solving computationally intensive financial optimization problems. We implement the approach in Python and evaluate it on portfolios of 5 to 50 assets using five years of market data, comparing solution quality against mixed-integer solvers (ECOS_BB), convex relaxations (OSQP), and particle swarm optimization. Experimental results demonstrate that the SNN solver achieves the highest expected return (0.261% daily) among all evaluated methods on the 50-asset portfolio, outperforming exact MIQP (0.225%) and PSO (0.092%), with runtimes ranging from 0.5 s for small portfolios to 8.4 s for high-quality solutions on large portfolios.
While current Python runtimes are comparable to existing approaches, the key contribution is establishing a path to neuromorphic hardware deployment: specialized SNN processors could execute these dynamics orders of magnitude faster than conventional architectures, enabling real-time portfolio rebalancing at institutional scale.


1. Introduction

Portfolio optimization with transaction costs and cardinality constraints presents a fundamental challenge in quantitative finance: determining an optimal asset allocation that balances expected returns against risk while respecting practical trading constraints. The classical Markowitz mean-variance framework [1] provides an elegant analytical solution for unconstrained continuous portfolios, but real-world requirements—transaction fees [2], regulatory limits on diversification, and the need for sparse holdings—transform the problem into a mixed-integer quadratic program [3] that resists efficient solution. Exact branch-and-bound methods [4] deliver provably optimal solutions but scale poorly with problem size, while metaheuristic approaches [5,6,7] sacrifice optimality guarantees for computational speed without offering a clear path to further acceleration.
Recent theoretical advances in computational neuroscience have revealed a promising alternative rooted in the dynamics of spiking neural networks [8]. Building on seminal work connecting recurrent neural networks to optimization [9,10,11], researchers have demonstrated that leaky integrate-and-fire (LIF) neuron models naturally implement projected gradient descent for constrained convex programs [12,13,14]. In this framework, continuous membrane potential integration corresponds to gradient flow on an objective function, while discrete threshold-triggered spike events enforce constraints through boundary projections. Stanojevic et al. [15] recently showed that this principle scales to deep architectures with remarkable efficiency, matching artificial neural network performance on image classification with only 0.3 spikes per neuron. This extreme sparsity indicates that SNN dynamics can optimize effectively with very few discrete events, making the approach suitable for real-time applications. For portfolio optimization, it suggests that constraint violations can be corrected through sparse projection spikes rather than continuous monitoring, enabling energy-efficient implementation on neuromorphic hardware; readers interested in the theoretical foundations and training dynamics underlying this sparsity result are referred to the original publication [15].
This biological computation model suggests an algorithmic strategy: if portfolio optimization can be reformulated as a constrained quadratic program compatible with LIF dynamics, it becomes amenable to implementation on specialized neuromorphic processors designed to execute spiking computations with extreme energy efficiency.
This work aligns with the core principles of biomimetics by drawing direct inspiration from biological neural computation mechanisms. Biological neurons achieve remarkable computational efficiency through event-driven signaling: rather than continuously transmitting information, neurons integrate inputs over time and emit discrete action potentials (spikes) only when threshold conditions are met. This sparse, asynchronous communication strategy enables biological neural networks to process complex information while maintaining extremely low energy consumption. The human brain operates on approximately 20 watts, far less than conventional computers performing similar computational tasks. Our approach mimics this biological strategy by translating portfolio optimization into a framework where continuous gradient integration (analogous to membrane potential dynamics) alternates with discrete projection spikes (analogous to action potentials) that enforce constraints. This biomimetic design not only provides algorithmic advantages but also establishes a natural pathway to neuromorphic hardware deployment, where specialized processors can execute these dynamics with the same energy-efficient, event-driven architecture that biological systems employ. By learning from nature’s computational strategies, we demonstrate how biomimetic principles can address fundamental challenges in quantitative finance while opening new avenues for energy-efficient computing.
The biomimetic perspective extends beyond mere algorithmic inspiration to encompass the broader philosophy of learning from biological systems to solve engineering challenges. Just as biomimetics has informed innovations in materials science (e.g., gecko-inspired adhesives), robotics (e.g., bird-inspired flight), and energy systems (e.g., photosynthesis-inspired solar cells), our work demonstrates how neural computation strategies can inform optimization algorithms. The key insight is that biological systems have evolved highly efficient solutions to complex problems through millions of years of natural selection, and these solutions often outperform engineered alternatives in terms of energy efficiency, robustness, and adaptability. By adopting the event-driven, sparse communication paradigm of biological neurons, we develop an optimization approach that inherits these advantages: reduced computational overhead, natural parallelization, and compatibility with emerging neuromorphic hardware platforms. This interdisciplinary synthesis of neuroscience, optimization theory, and computational finance exemplifies how biomimetic principles can drive innovation across traditional disciplinary boundaries.
This connection is more than theoretical. Neuromorphic hardware platforms such as Intel Loihi [16,17] and SpiNNaker [18] implement massively parallel, event-driven architectures optimized for spiking neural network execution. IBM’s TrueNorth [19] demonstrated that a million-neuron chip could operate with remarkable energy efficiency. Unlike traditional von Neumann processors that execute sequential instructions, these systems compute through distributed local operations, precisely the structure required for gradient-and-projection dynamics. Deploying portfolio optimization on such hardware could compress multi-second runtimes to microseconds while consuming orders of magnitude less energy, enabling real-time rebalancing at institutional scales currently impractical for conventional solvers.
This paper develops the mathematical and algorithmic foundation for neuromorphic portfolio optimization. We reformulate the transaction-cost-aware, cardinality-constrained Markowitz model as a constrained quadratic program with linear inequalities, implement an SNN-inspired solver that alternates between continuous gradient integration and discrete projection spikes, and demonstrate through comprehensive experiments on real equity data that the approach produces solutions competitive with exact mixed-integer programming and superior to metaheuristics. While our Python proof-of-concept achieves runtimes comparable to conventional methods, the primary contribution extends beyond current performance: we establish the pathway to neuromorphic deployment that could revolutionize computational finance through hardware-accelerated optimization.
Specifically, the main contributions are as follows:
  • Theoretical formulation: We establish the mathematical connection between constrained portfolio optimization and LIF neuron dynamics, reformulating the mixed-integer Markowitz model as a constrained quadratic program where gradient descent drives portfolio weights toward optimality while projection spikes enforce relaxed continuous constraints during optimization (including transaction cost budgets and relaxed binary selectors), with discrete cardinality requirements recovered through deterministic post-processing.
  • Algorithm realization: We implement the gradient-and-spike dynamics as a practical solver (Section 4) using adaptive ODE integration with event detection for constraint violations, demonstrating that the alternating continuous-discrete structure achieves O(N²) per-step complexity (where “per-step” refers to one ODE integration step during the continuous gradient descent phase) dominated by covariance matrix operations, with projection spikes requiring only O(N) operations due to sparse matrix-vector products. These operations are naturally parallelizable on neuromorphic hardware.
  • Comprehensive evaluation: We conduct systematic parameter sweeps across four portfolio sizes using five years of prices for 100 liquid equities, compare against exact MIQP (ECOS_BB), convex relaxation (OSQP + ℓ₁), and particle swarm optimization, and demonstrate that the SNN approach lies on the Pareto frontier of runtime versus solution quality while uniquely enabling neuromorphic deployment.
  • Hardware deployment pathway: We analyze the computational characteristics that make the SNN dynamics amenable to specialized processors, describe the mapping from algorithm to neuromorphic architecture, and quantify the potential speedup from event-driven parallel execution on platforms like Intel Loihi.
The remainder of the paper proceeds as follows. Section 2 positions our work relative to neural dynamics for optimization, neuromorphic computing, and portfolio selection methods. Section 3 establishes the mathematical foundation connecting LIF dynamics to projected gradient descent and formalizes the constrained portfolio problem. Section 4 describes the solver implementation and discusses neuromorphic hardware considerations. Section 5 presents experimental validation across multiple scales and baseline comparisons. Section 6 concludes with limitations and future directions for hardware deployment.

2. Related Work

This section positions our contribution relative to three interconnected research areas: the theoretical foundations connecting neural dynamics to constrained optimization, the evolution of neuromorphic computing platforms that enable efficient implementation of spiking networks, and the portfolio optimization literature that has explored both exact and heuristic methods for handling transaction costs and cardinality constraints. We emphasize how our work synthesizes insights from these domains to establish a practical pathway for hardware-accelerated financial optimization.

2.1. Neural Dynamics for Constrained Optimization

The connection between neural networks and optimization has deep roots in computational neuroscience. Hopfield [9,10] pioneered this area by demonstrating that recurrent networks with symmetric connections perform gradient descent on energy functions, establishing that neural dynamics can solve optimization problems through collective computational processes. This seminal work showed that biological neural circuits naturally minimize objectives through distributed local operations, a principle that remains central to modern neuromorphic computing.
Building on Hopfield’s foundation, researchers developed projection neural networks specifically designed for constrained optimization [20,21]. These networks implement projected gradient descent through continuous-time dynamics that alternate between gradient flow and boundary projections, directly enforcing linear inequality constraints. Xia and Wang [21] established global convergence guarantees for projection networks solving monotone variational inequalities, demonstrating that neural architectures can reliably solve constrained convex programs. Liu and Wang [22] extended this framework to handle both equality constraints and box constraints simultaneously, showing that single-layer networks suffice for complex feasible regions when equipped with appropriate projection operators.
Recent advances in spiking neural networks have strengthened the optimization connection by linking discrete spike events to constraint enforcement. Boerlin et al. [13] showed that networks of spiking neurons perform predictive coding through a form of constrained optimization, where spike events minimize prediction errors subject to metabolic costs. Barrett and Deneve [12] proved that networks of leaky integrate-and-fire neurons implicitly solve convex optimization problems, where membrane voltage thresholds implement inequality constraints and spike timing encodes optimal solutions. This work revealed that the alternation between continuous integration and discrete resets, the hallmark of LIF dynamics, directly corresponds to projected gradient descent. Mancoo et al. [14] unified these insights by demonstrating that balanced spiking networks solve quadratic programs with linear constraints, establishing the mathematical equivalence between spike-based computation and convex optimization that forms the theoretical foundation of our work. Stanojevic et al. [15] demonstrated that this principle scales to deep architectures, achieving competitive performance on recognition tasks while maintaining the sparse, event-driven computation that makes SNNs amenable to neuromorphic hardware. Recent developments in SNN training methodologies have further advanced the field: Perin et al. [23] introduced an alternating direction method of multipliers (ADMM) approach for SNN training that addresses the non-differentiability of spike functions while providing convergence guarantees, demonstrating that principled optimization frameworks can enhance SNN performance and reliability.
Our work extends this theoretical lineage to financial optimization. While previous projection networks focused on general constrained programs or robotic control [24], we specialize the framework to portfolio selection with transaction costs and cardinality requirements. Crucially, we identify the specific problem structure (constrained quadratic program with sparse linear inequalities) that enables mapping to LIF dynamics, and demonstrate that the resulting solver achieves solution quality competitive with exact mixed-integer methods while uniquely enabling neuromorphic deployment. Table 1 summarizes this theoretical progression from Hopfield’s pioneering work to modern SNN-based optimization, positioning our contribution within this intellectual lineage.

2.2. Neuromorphic Computing Platforms

The algorithmic insights linking neural dynamics to optimization would remain theoretical without specialized hardware capable of efficient SNN execution. Neuromorphic processors address this need through radically different architectures from those of traditional von Neumann machines. IBM’s TrueNorth [19] pioneered large-scale neuromorphic integration with a million-neuron chip operating at ultra-low power, demonstrating that spike-based computation could be implemented efficiently in silicon. Intel’s Loihi [16] implements asynchronous, event-driven computation where individual neurons communicate through discrete spike events, consuming energy only when active. This matches the sparse communication patterns of biological neural networks and the projection-spike structure of our portfolio solver. Davies et al. demonstrated that Loihi achieves 1000× energy efficiency improvements over GPUs for certain SNN workloads, validating the potential for orders-of-magnitude acceleration. Recent work [17] has further improved Loihi 2’s spike processing throughput and demonstrated efficient implementation of neuromorphic signal processing algorithms. Recent evaluations [25] have comprehensively benchmarked neuromorphic processors including Intel Loihi and BrainScaleS against standard AI accelerators across diverse application domains, demonstrating competitive performance with substantially lower power consumption for spike-based algorithms.
SpiNNaker [18] takes a complementary approach, implementing massively parallel ARM cores specialized for spike routing and synaptic processing. The architecture supports millions of simulated neurons executing in real time, enabling large-scale network deployment. Recent extensions have improved spike processing throughput and on-chip learning capabilities, making these platforms increasingly viable for practical applications beyond neuroscience research. Contemporary research [26] has addressed the practical challenges of deploying SNNs on neuromorphic hardware by developing training methods that explicitly account for hardware constraints such as limited neuron and synapse counts and low bit-width representations, enabling efficient real-world deployment of SNN-based algorithms.
The key architectural feature enabling our portfolio optimization application is the natural mapping between algorithm primitives and neuromorphic operations. Gradient descent (continuous voltage integration) maps to leak dynamics inherent in LIF neurons. Constraint checking (inequality evaluation) corresponds to threshold comparison. Projection spikes (discrete corrections) align with neuronal reset events. Matrix-vector products for covariance evaluation distribute across parallel synaptic operations. This structural compatibility distinguishes our SNN-based approach from both exact solvers, which require sequential branching operations unsuitable for parallel hardware, and metaheuristics, which lack a principled mapping to specialized processors. Table 2 summarizes the capabilities of major neuromorphic platforms and their suitability for deploying optimization algorithms like ours.

2.3. Portfolio Optimization Methods

The portfolio optimization literature has explored diverse approaches to handle transaction costs and cardinality constraints, each with characteristic trade-offs. Salo et al. [28] provide a comprehensive review of fifty years of portfolio optimization research, documenting the evolution from classical mean-variance theory to modern approaches incorporating machine learning, robust optimization, and alternative risk measures, highlighting the persistent computational challenges posed by real-world constraints. Exact methods based on mixed-integer programming [3,4] formulate the problem with binary variables indicating asset selection and apply branch-and-bound algorithms to explore the combinatorial solution space. While provably optimal, these methods exhibit exponential worst-case complexity, making them impractical for large universes or time-critical applications. Extensions incorporating robust optimization [29,30] or risk measures beyond variance [31] further increase computational burden, motivating research on faster alternatives.
Metaheuristic algorithms offer computational tractability by replacing exhaustive search with stochastic exploration [32,33]. Particle swarm optimization [6,34] maintains populations of candidate portfolios that evolve based on individual and collective performance. Genetic algorithms [35,36] encode portfolios as binary strings and apply selection, crossover, and mutation operators inspired by biological evolution. Beetle antennae search [37] reduces population requirements through targeted sampling. Khan et al. [7] provided a comprehensive review of metaheuristic approaches for cardinality-constrained portfolios, highlighting their computational advantages but noting the lack of optimality guarantees. While these methods find acceptable solutions more quickly than exact approaches, they require extensive parameter tuning, lack theoretical convergence analysis, and provide no clear path to hardware acceleration.
Recent work has integrated machine learning into portfolio selection, using deep neural networks for return prediction [38] or reinforcement learning for dynamic rebalancing [39]. However, these sophisticated learning approaches increase rather than decrease computational requirements, making real-time deployment challenging. They also do not address the fundamental mixed-integer structure of cardinality constraints.
Modern convex optimization solvers [40,41] provide efficient solutions for continuous relaxations of portfolio problems. OSQP (Operator Splitting Quadratic Program) [40] achieves remarkable performance for large-scale quadratic programs through first-order splitting methods, making it a strong baseline for comparison. However, these solvers require post-processing to recover discrete solutions and lack the hardware acceleration pathway offered by neuromorphic computing.
Early research on transaction costs [2] established that even small trading fees significantly alter optimal portfolios, leading to sparse solutions where many assets receive zero allocation. This observation motivates the cardinality constraint formulation we adopt: explicitly limiting portfolio size through integer constraints rather than relying on regularization to induce sparsity. Our SNN-based approach addresses the resulting mixed-integer program through continuous relaxation during optimization combined with deterministic post-processing for discrete recovery, achieving a practical middle ground between exact methods and heuristics.
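As one concrete illustration of such deterministic discrete recovery, a common rule is a top-k projection: keep the k largest relaxed weights, zero the rest, and renormalize so the allocation again sums to one. This is a sketch of that generic rule, not necessarily the paper’s exact post-processing procedure:

```python
import numpy as np

def recover_cardinality(weights, k):
    """Keep the k largest relaxed weights, zero the rest, renormalize."""
    w = np.asarray(weights, dtype=float)
    keep = np.argsort(w)[-k:]          # indices of the k largest weights
    sparse = np.zeros_like(w)
    sparse[keep] = w[keep]
    return sparse / sparse.sum()       # re-establish the budget constraint

# Relaxed weights over five assets, recovered to a 3-asset portfolio.
w = recover_cardinality([0.30, 0.05, 0.25, 0.10, 0.30], k=3)
```

The recovered vector satisfies the cardinality constraint exactly while preserving the relative proportions of the retained assets.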
Critically, none of the existing portfolio optimization methods (exact, metaheuristic, or machine learning-based) offers a clear pathway to specialized hardware acceleration. Exact solvers require sequential decision trees incompatible with parallel architectures. Metaheuristics involve population management and random sampling that map poorly to neuromorphic substrates. Learning-based methods require substantial training infrastructure. Our SNN formulation uniquely bridges computational finance and neuromorphic computing by establishing the theoretical connection (constrained QP to LIF dynamics), implementing a practical solver (gradient-and-spike algorithm), and demonstrating competitive solution quality that justifies the hardware investment.

2.4. Comparison with Existing Optimization Algorithms

To clarify the distinctive characteristics of our SNN-based approach, we explicitly contrast it with existing optimization methods across several key dimensions: algorithmic structure, computational complexity, constraint handling, and hardware deployment potential.
SNN vs. Exact MIQP Solvers: Exact mixed-integer quadratic programming methods [3,4] such as ECOS_BB employ branch-and-bound algorithms that systematically explore the discrete solution space through sequential decision trees. While these methods provide optimality guarantees, they exhibit exponential worst-case complexity and require sequential branching operations that cannot be parallelized effectively. In contrast, our SNN approach solves a relaxed continuous problem using parallel gradient-projection dynamics, recovering discrete solutions through deterministic post-processing. This relaxation-then-recovery strategy allows exploration of a broader solution space before discrete constraint enforcement, potentially identifying superior local optima as demonstrated in our experimental results (Section 5). Critically, the SNN dynamics map naturally to parallel neuromorphic hardware [16,17], while branch-and-bound algorithms fundamentally require sequential execution incompatible with specialized processors.
SNN vs. Convex Relaxation Methods: Convex relaxation approaches [40,41] such as OSQP + ℓ₁ solve continuous approximations efficiently but require post-processing heuristics to recover discrete solutions. These methods lack principled mechanisms for enforcing cardinality constraints during optimization, relying instead on regularization [42] to promote sparsity followed by thresholding. Our SNN method integrates constraint enforcement directly into the optimization dynamics through projection spikes, providing a principled mechanism for maintaining feasibility throughout the solution process. While both approaches use post-processing for final discrete recovery, the SNN dynamics naturally drive selector variables toward binary values (0 or 1) through the optimization trajectory, making the recovery procedure more reliable [43,44]. Additionally, the SNN formulation provides a clear pathway to neuromorphic hardware deployment [25,26], while convex solvers remain limited to conventional architectures.
SNN vs. Metaheuristic Approaches: Metaheuristic methods such as particle swarm optimization [5,6] and genetic algorithms [36] replace exhaustive search with stochastic exploration, trading optimality guarantees for computational speed. These methods require extensive parameter tuning, lack theoretical convergence analysis, and involve population management and random sampling operations that map poorly to specialized hardware. Khan et al. [7] provided a comprehensive review of metaheuristic approaches for cardinality-constrained portfolios, highlighting their computational advantages but noting the lack of optimality guarantees. Our SNN approach builds on the theoretical foundation of projected gradient descent [12,21] for constrained quadratic programs, providing principled convergence behavior and requiring minimal parameter tuning (two key parameters k₀ and k₁ with clear physical interpretations). The deterministic gradient-projection structure maps directly to neuromorphic hardware, while metaheuristics’ stochastic nature precludes efficient hardware acceleration.
SNN vs. Other Neural Dynamics Methods: Previous projection neural networks [20,21] implement continuous-time dynamics for constrained optimization but operate in analog voltage space without discrete spike events. While these methods share the projected gradient descent foundation, they lack the event-driven computation model that enables neuromorphic hardware deployment. Our SNN approach explicitly models discrete spike events that correspond to constraint violations, mirroring the biological computation model and enabling direct mapping to spike-based neuromorphic processors. This spike-based structure provides natural sparsity (most neurons remain silent), enabling energy-efficient implementation on specialized hardware designed for event-driven computation.
The unique advantage of our SNN formulation lies in its dual nature: it combines the theoretical rigor of projected gradient descent with the practical pathway to neuromorphic hardware deployment. Unlike exact solvers (sequential branching), convex relaxations (no hardware mapping), or metaheuristics (stochastic operations), the SNN dynamics operate through simple local computations: vector operations, threshold comparisons, and fixed-magnitude updates that map directly to neuromorphic processor architectures. This structural compatibility, combined with competitive solution quality demonstrated in our experiments, establishes the SNN approach as a promising direction for real-time portfolio optimization at institutional scales.

3. Preliminaries

This section establishes the mathematical foundation for our approach. We first review the leaky integrate-and-fire neuron model and explain how its dynamics naturally implement projected gradient descent for constrained optimization. We then formulate the portfolio selection problem with transaction costs and cardinality constraints, showing how it can be cast as a constrained quadratic program amenable to SNN dynamics.

3.1. Spiking Neural Network Dynamics

Spiking neural networks model individual neurons as dynamic systems that integrate input currents over time and emit discrete spike events when an internal state variable crosses a threshold. The leaky integrate-and-fire (LIF) neuron, the most widely studied SNN model, captures essential computational properties while remaining mathematically tractable.

3.1.1. Leaky Integrate and Fire Neuron Model

The membrane potential dynamics of a single LIF neuron follow the first-order differential equation
τ_m dV_m/dt = −(V_m − V_rest) + R_m I(t)    (1)
where V_m(t) represents the membrane potential, τ_m is the membrane time constant, V_rest denotes the resting potential, R_m is the membrane resistance, and I(t) represents the input current.
The dynamics in (1) describe a leaky integrator: when I(t) = 0, the membrane potential decays exponentially toward V_rest with time constant τ_m. Positive input currents drive V_m above V_rest, while negative currents drive it below.
When the membrane potential reaches a threshold value V_th, the neuron emits a spike and the voltage undergoes a discrete reset:
if V_m ≥ V_th, then V_m ← V_reset    (2)
where V_reset < V_th is the reset potential. For our optimization application, we focus on these essential threshold and reset mechanics; secondary physiological details such as refractory periods are not required.
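The integrate-and-reset behavior in (1) and (2) can be sketched with simple Euler integration; the parameter values below are illustrative, not taken from the paper:

```python
# Minimal Euler-integration sketch of the LIF dynamics (1)-(2).
# All parameter values are illustrative, not taken from the paper.

def simulate_lif(current, dt=0.1, tau_m=10.0, v_rest=0.0,
                 r_m=1.0, v_th=1.0, v_reset=0.0):
    """Integrate the membrane potential over an input-current trace;
    return the voltage trace and the list of spike step indices."""
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(current):
        # Leaky integration: decay toward v_rest plus driven input, Eq. (1).
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_th:           # threshold crossing: emit a spike, Eq. (2)
            spikes.append(step)
            v = v_reset         # discrete reset
        trace.append(v)
    return trace, spikes

# A constant supra-threshold current produces regular spiking.
trace, spikes = simulate_lif([2.0] * 200)
```

With a constant input whose steady-state voltage exceeds V_th, the neuron charges, fires, resets, and repeats, exactly the alternation between continuous integration and discrete events exploited in the next subsection.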

3.1.2. Connection to Constrained Optimization

The key insight connecting LIF dynamics to optimization is recognizing that voltage thresholds naturally implement inequality constraints. Consider a constrained optimization problem
$$\underset{y}{\text{minimize}} \; f(y) \quad \text{subject to} \quad C y + d \leq 0 \quad (3)$$
where $y \in \mathbb{R}^n$ is the optimization variable, $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function, $C \in \mathbb{R}^{m \times n}$ defines $m$ linear inequality constraints, and $d \in \mathbb{R}^m$ specifies constraint offsets.
Gradient descent on the objective follows
$$\frac{dy}{dt} = -k_0 \nabla f(y) \quad (4)$$
where $k_0 > 0$ is the step size and $\nabla f$ denotes the gradient. Comparing (4) with the LIF dynamics (1), we observe a structural correspondence: if we interpret $V_m$ as $y$ and $I(t)$ as encoding negative gradient information, the neuron dynamics implement gradient descent.
The threshold mechanism (2) in LIF neurons provides a natural way to enforce constraints. When constraint violations are detected (analogous to $V_m \geq V_{\mathrm{th}}$), a discrete correction event (spike) projects the state back toward feasibility. If the set $\mathcal{V} = \{ i : [C y + d]_i > 0 \}$ of violated constraints is nonempty, we apply a correction along their signed normals:
$$y \leftarrow y - k_1 C^\top \mathbf{1}_{\mathcal{V}} \quad (5)$$
where $k_1 > 0$ controls the spike magnitude and $\mathbf{1}_{\mathcal{V}} \in \mathbb{R}^m$ is the indicator vector with $[\mathbf{1}_{\mathcal{V}}]_i = 1$ if $i \in \mathcal{V}$ and $[\mathbf{1}_{\mathcal{V}}]_i = 0$ otherwise. These repeated corrections resemble LIF resets and keep the trajectory close to the feasible region.
This alternation between continuous gradient descent (4) and discrete constraint projections (5) forms the basis of our SNN-inspired optimization algorithm. The continuous phase minimizes the objective, while discrete correction events maintain feasibility, analogous to how neurons integrate inputs continuously but generate spikes discretely when thresholds are crossed.
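The alternation can be made concrete on a toy problem. The sketch below applies the gradient step (4) and the spike correction (5) to a two-variable quadratic with box constraints; the problem data and the gains $k_0$, $k_1$ are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy instance: minimize f(y) = 0.5 * ||y - g||^2 subject to y_i <= 1,
# written as C y + d <= 0 with C = I and d = -1 (illustrative data).
g = np.array([2.0, 2.0])                 # unconstrained minimizer, outside the box
C = np.eye(2)
d = np.array([-1.0, -1.0])

k0, k1 = 0.1, 0.05                       # gradient rate and spike magnitude
y = np.zeros(2)                          # feasible start
for _ in range(500):
    y = y - k0 * (y - g)                 # Euler step of dy/dt = -k0 * grad f
    for _ in range(100):                 # spike until feasible again, as in (5)
        violated = (C @ y + d) > 0
        if not violated.any():
            break
        y = y - k1 * (C.T @ violated.astype(float))
```

The gradient pulls the state toward the infeasible minimizer at $(2, 2)$, while spikes repeatedly push it back, so the trajectory settles at the constrained optimum on the boundary $y = (1, 1)$.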

3.2. Portfolio Optimization Problem Formulation

We now formulate the constrained portfolio optimization problem that will be solved using the SNN-inspired framework in Section 4. The formulation extends the classical Markowitz mean-variance model with transaction costs and cardinality constraints to reflect realistic trading conditions.

3.2.1. Basic Notation and Markowitz Model

Consider a financial market with $N$ available stocks denoted $S_1, S_2, \ldots, S_N$. Historical price data are used to estimate the mean return rate $\mu_i$ and return covariance $\sigma_{ij}$ for each pair of stocks, where $i, j \in \{1, 2, \ldots, N\}$. We organize these statistics into a mean return vector $\mu \in \mathbb{R}^{1 \times N}$ and covariance matrix $\Sigma \in \mathbb{R}^{N \times N}$:
$$\mu = [\mu_1 \;\; \mu_2 \;\; \cdots \;\; \mu_N] \quad (6)$$
$$\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1N} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{N1} & \sigma_{N2} & \cdots & \sigma_{NN} \end{bmatrix} \quad (7)$$
The covariance matrix $\Sigma$ is symmetric with $\sigma_{ij} = \sigma_{ji}$, and positive semidefinite under standard assumptions.
Let $t = [t_1, t_2, \ldots, t_N]$ denote the portfolio allocation vector, where $t_i$ represents the fraction of total capital invested in stock $S_i$. We normalize the total investment to unity, so $\sum_{i=1}^N t_i = 1$, or equivalently $\mathbf{1} t^\top = 1$, where $\mathbf{1}$ is a row vector of ones.
The expected return of the portfolio is given by
$$\bar{\mu}(t) = t \mu^\top \quad (8)$$
and the portfolio variance, representing risk, is
$$\bar{\Sigma}(t) = t \Sigma t^\top \quad (9)$$
The classical Markowitz mean-variance optimization seeks to minimize risk for a target expected return $\mu_{\mathrm{req}}$:
$$\begin{aligned} \underset{t}{\text{minimize}} \quad & t \Sigma t^\top \\ \text{subject to} \quad & t \mu^\top = \mu_{\mathrm{req}} \\ & \mathbf{1} t^\top = 1 \\ & 0 \leq t_i \leq 1, \; i = 1, \ldots, N \end{aligned} \quad (10)$$
While this formulation is elegant and admits efficient solution methods, it lacks several features necessary for practical implementation.
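The frictionless model above is small enough to solve directly with an off-the-shelf solver. The sketch below uses SciPy's SLSQP on a made-up three-asset instance; the returns, covariance, and target are illustrative numbers, not data from this study.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative three-asset instance of the frictionless Markowitz problem:
# minimize t' Sigma t subject to t' mu = mu_req, sum(t) = 1, 0 <= t_i <= 1.
mu = np.array([0.001, 0.002, 0.003])          # daily mean returns (made up)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.02],
                  [0.01, 0.02, 0.12]])        # symmetric positive definite
mu_req = 0.002                                # target daily return

res = minimize(
    lambda t: t @ Sigma @ t,                  # portfolio variance (risk)
    x0=np.full(3, 1.0 / 3.0),                 # feasible equal-weight start
    bounds=[(0.0, 1.0)] * 3,
    constraints=[
        {"type": "eq", "fun": lambda t: t @ mu - mu_req},  # target return
        {"type": "eq", "fun": lambda t: t.sum() - 1.0},    # budget
    ],
    method="SLSQP",
)
t_opt = res.x
```

Because the equal-weight start already meets both equality constraints, the solver only has to trade weight between assets to reduce variance, and the optimal portfolio's variance cannot exceed that of the starting point.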

3.2.2. Transaction Costs

Real-world trading incurs transaction costs including brokerage fees, bid-ask spreads, and market impact. A common model assumes costs proportional to trade size, leading to the linear transaction cost model. Let $\alpha_i$ denote the transaction cost rate for stock $S_i$, organized into a vector $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]$. In our experiments the entries of $\alpha$ are drawn uniformly and scaled so that $\sum_i \alpha_i = 0.1$, yielding an aggregate 10% surcharge across the universe. The total cost for establishing portfolio $t$ is
$$\text{Transaction Cost} = \alpha t^\top \quad (11)$$
Since the investor must pay both for the shares and the transaction costs, the budget constraint becomes
$$(\mathbf{1} + \alpha) t^\top = 1 \quad (12)$$
This replaces the simple budget constraint $\mathbf{1} t^\top = 1$ of the frictionless Markowitz model.

3.2.3. Cardinality Constraint

Investors often wish to limit the number of stocks held in a portfolio to reduce monitoring costs, improve interpretability, and comply with regulatory requirements. The cardinality constraint specifies that at most k stocks may receive nonzero allocation, where k < N .
To formulate this constraint, we introduce binary decision variables $z = [z_1, z_2, \ldots, z_N]$, where $z_i = 1$ if stock $S_i$ is included in the portfolio and $z_i = 0$ otherwise. The cardinality constraint is then
$$\sum_{i=1}^N z_i = k \quad \text{or equivalently} \quad \mathbf{1} z^\top = k \quad (13)$$
We must also ensure that stocks excluded from the portfolio receive zero allocation, which is enforced by coupling the continuous allocation variables $t$ with the binary selection variables $z$ through
$$0 \leq t_i \leq z_i, \quad i = 1, 2, \ldots, N \quad (14)$$
When $z_i = 0$, this forces $t_i = 0$, while $z_i = 1$ allows $t_i \in [0, 1]$.

3.2.4. Unified Optimization Formulation

We adopt an objective function that incorporates both risk minimization and return maximization rather than fixing a target return. Specifically, we minimize a weighted combination of variance and negative return:
$$E(t) = t \Sigma t^\top - \Lambda\, t \mu^\top \quad (15)$$
where the parameter $\Lambda \geq 0$ controls the risk-return tradeoff. Larger $\Lambda$ places more emphasis on maximizing returns, while smaller values prioritize risk reduction. The negative sign on the return term ensures that minimizing $E$ corresponds to maximizing returns.
Combining the objective (15) with all constraints, the complete portfolio optimization problem is
$$\begin{aligned} \underset{t, z}{\text{minimize}} \quad & t \Sigma t^\top - \Lambda\, t \mu^\top \\ \text{subject to} \quad & (\mathbf{1} + \alpha) t^\top = 1 \\ & 0 \leq t_i \leq z_i, \; i = 1, \ldots, N \\ & \mathbf{1} z^\top = k \\ & z \in \{0, 1\}^{1 \times N} \end{aligned} \quad (16)$$
where the minimization is performed over both the continuous portfolio weights $t$ and the binary selection variables $z$. The objective function depends on $t$ through the quadratic risk term $t \Sigma t^\top$ and the linear return term $t \mu^\top$, while $z$ enters only through the constraints.
This formulation presents significant computational challenges. The binary variables z make it a mixed-integer nonlinear program, which is NP-hard in general. Traditional methods such as branch-and-bound become expensive as N grows, motivating alternative approaches.
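To see the combinatorial structure concretely, a tiny instance of problem (16) can still be solved exactly by enumerating every $k$-subset of assets and solving the continuous subproblem on each. This brute force scales as $\binom{N}{k}$ and is shown only for intuition; the data are synthetic and the subproblem solver (SciPy's SLSQP) is our own choice.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Exact reference for a tiny instance: enumerate every k-subset and solve the
# continuous subproblem restricted to it, keeping the best. Scales as C(N, k).
rng = np.random.default_rng(0)
N, k, lam = 6, 3, 1.0
mu = rng.uniform(0.0005, 0.003, N)            # synthetic mean returns
B = rng.normal(size=(N, N))
Sigma = 0.001 * (B @ B.T) / N                 # random PSD covariance
alpha = np.full(N, 0.1 / N)                   # flat transaction-cost rates

def solve_subset(idx):
    """Solve the continuous problem restricted to the assets in idx."""
    idx = list(idx)
    m, S, a = mu[idx], Sigma[np.ix_(idx, idx)], alpha[idx]
    res = minimize(lambda t: t @ S @ t - lam * (t @ m),   # objective (15)
                   x0=np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq",
                                 "fun": lambda t: (1.0 + a) @ t - 1.0}],
                   method="SLSQP")
    return res.fun, idx, res.x

best = min((solve_subset(s) for s in itertools.combinations(range(N), k)),
           key=lambda r: r[0])
```

With $N = 6$ and $k = 3$ only 20 subsets exist, but the count explodes for realistic sizes ($\binom{50}{20} \approx 4.7 \times 10^{13}$), which is precisely why branch-and-bound and relaxation methods are needed.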
The key to enabling SNN-based solution is reformulating this mixed-integer problem as a constrained quadratic program. In Section 4, we relax the binary constraint $z \in \{0, 1\}^{1 \times N}$ to the continuous box $z \in [0, 1]^{1 \times N}$ during optimization, express all constraints as linear inequalities compatible with the projection mechanism (5), and leverage SNN dynamics to maintain feasibility. After the dynamics settle, a deterministic post-processing step reinstates discrete portfolios satisfying the cardinality limit, as described in Section 4.

4. SNN-Inspired Optimization Method

This section describes our implementation of the SNN-inspired solver for portfolio optimization. We first show how the mixed-integer Markowitz model from Section 3 is reformulated as a constrained quadratic program amenable to SNN dynamics. We then describe the solver algorithm that alternates between continuous gradient integration and discrete projection spikes, detail the post-processing procedure that recovers discrete portfolios, and discuss the computational characteristics that make the approach suitable for neuromorphic hardware deployment.

4.1. Quadratic Program Reformulation

To enable SNN-based solution, we reformulate the mixed-integer problem (16) as a constrained quadratic program. The solver accepts problems of the form
$$\underset{y}{\text{minimize}} \; \frac{1}{2} y^\top A y + b^\top y \quad \text{subject to} \quad C y + d \leq 0 \quad (17)$$
where $y \in \mathbb{R}^{2N}$ contains both portfolio weights and relaxed binary selectors stacked as
$$y = \begin{bmatrix} t^\top \\ z^\top \end{bmatrix} \quad (18)$$
The binary constraint $z \in \{0, 1\}^N$ is relaxed to $z \in [0, 1]^N$ during optimization; discrete portfolios are recovered through post-processing (Section 4).
The Hessian and linear term encode the Markowitz objective (15) with risk-return trade-off parameter $\Lambda$:
$$A = \begin{bmatrix} 2\Sigma & 0 \\ 0 & 0 \end{bmatrix}, \quad b = \begin{bmatrix} -\Lambda \mu^\top \\ 0 \end{bmatrix} \quad (19)$$
The block structure ensures that only the portfolio weights $t$ appear in the quadratic term (through $\Sigma$), while the relaxed selectors $z$ enter only via constraints.
All constraints from (16) are expressed as linear inequalities compatible with the projection mechanism (5). The budget constraint $(\mathbf{1} + \alpha) t^\top = 1$ becomes two inequalities with tolerance $\epsilon_{\mathrm{budget}}$; the box constraints $0 \leq t_i \leq z_i$ and $0 \leq z_i \leq 1$ contribute $4N$ rows; and the cardinality requirement $\mathbf{1} z^\top = k$ is enforced with upper and lower bounds using tolerance $\epsilon_{\mathrm{card}}$. The resulting constraint matrix $C \in \mathbb{R}^{(4N + 4) \times 2N}$ is sparse and problem-specific, so we precompute it once per scenario along with the offsets $d$. This allows rapid re-instantiation for different $(k_0, k_1)$ parameter sweeps.
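The assembly of such a stacked problem can be sketched as follows, following the block structure (19) and the constraint rows just described. The row ordering and the example tolerances and data are our own illustrative choices; the paper does not specify the exact ordering used in its implementation.

```python
import numpy as np

# Sketch of assembling (A, b, C, d) for the stacked variable y = [t; z].
def build_qp(Sigma, mu, alpha, k, lam=1.0, eps_budget=1e-6, eps_card=1e-6):
    N = len(mu)
    A = np.zeros((2 * N, 2 * N))
    A[:N, :N] = 2.0 * Sigma                       # quadratic term acts on t only
    b = np.concatenate([-lam * mu, np.zeros(N)])  # linear term: -Lambda * mu

    rows, offs = [], []
    one_a = np.concatenate([1.0 + alpha, np.zeros(N)])
    rows += [one_a, -one_a]                        # budget (1+alpha) t = 1, +/- eps
    offs += [-1.0 - eps_budget, 1.0 - eps_budget]
    for i in range(N):
        e_t, e_z = np.zeros(2 * N), np.zeros(2 * N)
        e_t[i], e_z[N + i] = 1.0, 1.0
        rows += [-e_t, e_t - e_z, -e_z, e_z]       # 0 <= t_i <= z_i, 0 <= z_i <= 1
        offs += [0.0, 0.0, 0.0, -1.0]
    one_z = np.concatenate([np.zeros(N), np.ones(N)])
    rows += [one_z, -one_z]                        # cardinality 1 z = k, +/- eps
    offs += [-k - eps_card, k - eps_card]
    return A, b, np.array(rows), np.array(offs)    # C has 4N + 4 rows

A, b, C, d = build_qp(np.eye(3) * 0.01,
                      np.array([0.001, 0.002, 0.003]),
                      np.full(3, 0.1 / 3.0), k=2)
```

Every equality constraint contributes a pair of opposing inequality rows, which is how all requirements fit the single form $C y + d \leq 0$ expected by the projection mechanism.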

4.2. SNN Dynamics and Projection Algorithm

The core solver implements the alternating gradient-and-spike dynamics described in Section 3. Algorithm 1 summarizes the main loop. The configuration is specified by the parameters $(k_0, k_1, t_{\max}, h_{\max}, \epsilon_{\mathrm{tol}})$, where $k_0$ controls the gradient descent rate, $k_1$ sets the projection spike magnitude, $t_{\max}$ is the simulation time horizon, $h_{\max}$ caps the ODE integration step size, and $\epsilon_{\mathrm{tol}}$ is the constraint satisfaction tolerance.
Between projection events, the state evolves according to the gradient flow
$$\frac{dy}{dt} = -k_0 (A y + b) \quad (20)$$
Due to the block structure (19), this updates only the portfolio weights $t$ while leaving the relaxed selectors $z$ unchanged during continuous descent. We integrate using an adaptive Runge–Kutta (RK45) scheme with event detection to halt when constraint violations are imminent. The maximum step size $h_{\max} = 0.1$ ensures smooth trajectories and simplifies spike time reconstruction.
Each integration step produces a trajectory segment $(t_{\mathrm{seg}}, y_{\mathrm{seg}})$ containing the time points and state values along the continuous gradient descent path. The segment terminates either when a constraint violation is detected (triggering a return to the projection phase) or when the integration step completes successfully. The algorithm appends each segment to the full solution trajectory and updates the current state to the final point of the segment: $t \leftarrow t_{\mathrm{seg}}[-1]$ and $y \leftarrow y_{\mathrm{seg}}[-1]$, where $[-1]$ denotes the last element of the segment arrays. This piecewise trajectory construction lets the solver maintain a complete record of the optimization path while handling the alternating continuous-discrete dynamics efficiently.
When the event detector identifies that $\max_i [C y + d]_i$ approaches zero (within $\epsilon_{\mathrm{tol}}$), the solver applies a discrete projection spike:
$$y \leftarrow y - k_1 C^\top \mathbf{1}_{\mathcal{V}} \quad (21)$$
where $\mathbf{1}_{\mathcal{V}}$ selects the currently violated constraint rows. Unlike analytical projection onto the constraint boundary, this correction moves along signed constraint normals with fixed magnitude $k_1$, directly mirroring the reset behavior of leaky integrate-and-fire neurons. The projection is iterated (up to $N_{\max} = 100$ iterations) until all constraints satisfy $C y + d \leq \epsilon_{\mathrm{tol}}$. We record spike metadata (timestamps, active constraints, displacement norms) for the diagnostic analysis in Section 5.
Integration terminates when either the simulated time reaches $t_{\max}$ or the gradient norm falls below $10^{-8}$. Solutions that terminate exactly at $t_{\max}$ are flagged as time-horizon limited, indicating that longer simulation might yield further improvement.
Algorithm 1 SNN-inspired portfolio optimization algorithm
Input: problem matrices $(A, b, C, d)$, solver parameters $(k_0, k_1, t_{\max}, h_{\max}, \epsilon_{\mathrm{tol}})$, feasible initial state $x_0$
 1: $t \leftarrow 0$, $x \leftarrow x_0$
 2: while $t < t_{\max}$ and gradient norm $> 10^{-8}$ do
 3:     $x \leftarrow \mathrm{ProjectToFeasible}(x, k_1, \epsilon_{\mathrm{tol}})$   ▹ apply (21) until feasible
 4:     integrate $\dot{x} = -k_0 (A x + b)$ with event detection for constraint violations
 5:     append $(t_{\mathrm{seg}}, x_{\mathrm{seg}})$ to the solution trajectory
 6:     $t \leftarrow t_{\mathrm{seg}}[-1]$, $x \leftarrow x_{\mathrm{seg}}[-1]$
 7:     if trajectory stalled at boundary then
 8:         break
 9:     end if
10: end while
11: return trajectory, spike metadata, final solution $x$
Figure 1 provides a visual overview of the complete SNN optimization workflow, illustrating the key algorithmic components and their interactions. The flowchart highlights the alternating structure between continuous gradient descent phases and discrete projection spike events, emphasizing the event-driven nature of constraint enforcement. The main optimization loop continues until convergence criteria are met, with trajectory segments recorded throughout the process. The final post-processing step deterministically recovers a discrete portfolio satisfying the cardinality constraint, completing the transformation from relaxed continuous solution to implementable discrete allocation. This visual representation complements Algorithm 1 by showing the dynamic flow and decision points that govern the solver’s behavior.
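Algorithm 1 can be sketched compactly using SciPy's `solve_ivp` for the continuous phase. The toy problem and parameter values below are illustrative, and the sketch omits the trajectory and spike-metadata bookkeeping of the full implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def snn_solve(A, b, C, d, y0, k0=0.05, k1=0.05, t_max=50.0,
              eps_tol=1e-10, n_max=100):
    """Sketch of Algorithm 1: alternate projection spikes (21) with the
    gradient flow (20). Illustrative, not the paper's full implementation."""
    y, t = np.asarray(y0, dtype=float), 0.0
    while t < t_max:
        # projection phase: spike along violated constraint normals
        for _ in range(n_max):
            viol = (C @ y + d) > 0.0
            if not viol.any():
                break
            y = y - k1 * (C.T @ viol.astype(float))
        grad = A @ y + b
        if np.linalg.norm(grad) < 1e-8:
            break
        # continuous phase: integrate until a constraint is about to be hit
        def event(s, v):
            return np.max(C @ v + d) - eps_tol
        event.terminal = True
        event.direction = 1.0
        sol = solve_ivp(lambda s, v: -k0 * (A @ v + b), (t, t_max), y,
                        events=event, max_step=0.1)
        t, y = sol.t[-1], sol.y[:, -1]
        if sol.t_events[0].size == 0:
            break  # reached t_max without further violations
    return y

# Toy problem: minimize 0.5*||y - (2,2)||^2 subject to y <= 1.
y_star = snn_solve(np.eye(2), np.array([-2.0, -2.0]),
                   np.eye(2), np.array([-1.0, -1.0]), [0.0, 0.0])
```

On this toy instance the gradient flow drives the state toward the infeasible point $(2, 2)$, the event detector halts each segment at the boundary, and the spikes hold the state near the constrained optimum $(1, 1)$.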

4.3. Discrete Portfolio Recovery and Validation

After the SNN dynamics converge, we recover a discrete portfolio that satisfies the original cardinality constraint $\mathbf{1} z^\top = k$. The relaxed selector values $z$ produced by the solver are treated as continuous scores indicating asset importance. We select the $k$ assets with the largest $z_i$ values, zero out the weights of unselected assets, clip the remaining weights to $[0, \infty)$, and renormalize to satisfy the budget constraint with transaction costs:
$$\sum_{i \in S} (1 + \alpha_i) t_i = 1 \quad (22)$$
where $S$ denotes the set of selected assets. This deterministic top-$k$ recovery procedure is applied uniformly across all experiments.
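The recovery step can be sketched directly; the relaxed weights, selector scores, and cost rates below are illustrative inputs, not experimental values.

```python
import numpy as np

def recover_portfolio(t_relaxed, z_relaxed, alpha, k):
    """Top-k discrete recovery: keep the k assets with the largest relaxed
    selectors, clip negatives, and renormalize to satisfy (22).
    Assumes at least one selected weight is positive."""
    S = np.argsort(z_relaxed)[-k:]            # indices of the k largest scores
    t = np.zeros_like(t_relaxed)
    t[S] = np.clip(t_relaxed[S], 0.0, None)   # zero unselected, clip negatives
    t[S] /= (1.0 + alpha[S]) @ t[S]           # enforce sum (1+alpha_i) t_i = 1
    return t, set(S.tolist())

# illustrative relaxed solution for four assets, cardinality k = 2
t, S = recover_portfolio(np.array([0.30, 0.05, 0.40, 0.25]),
                         np.array([0.90, 0.10, 0.95, 0.60]),
                         np.full(4, 0.025), k=2)
```

Here the two assets with selector scores 0.95 and 0.90 are retained, and their weights are rescaled so that shares plus transaction costs exhaust the unit budget.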
The binary relaxation strategy is supported by convex relaxation theory for cardinality-constrained optimization [43,44]. When the relaxed selectors $z$ exhibit strong bimodality (many values near 0 or 1), the continuous relaxation closely approximates the discrete problem, and the top-$k$ recovery procedure reliably identifies the optimal discrete solution. This occurs because the optimization naturally drives selector values toward the boundaries: assets that contribute strongly to the objective receive values near 1, while less important assets converge toward 0, so top-$k$ selection on the relaxed selector magnitudes captures this importance ranking. The approach is well established in the sparse optimization literature [42], where convex relaxations of cardinality constraints (such as $\ell_1$ regularization) followed by thresholding yield near-optimal solutions when the solution is sparse. The relaxation may perform less well when many selector values cluster near 0.5, indicating ambiguity in asset selection, but our empirical results show that the SNN dynamics naturally promote bimodal distributions, making the recovery procedure effective across all tested scenarios.
Post-processing constraint residuals are computed and recorded alongside objective value, expected return, variance, and spike statistics in the experimental summaries. This validation ensures consistency across both parameter sweeps and baseline comparisons.

4.4. Parameter Configuration and Tolerance Settings

Section 5 explores solver configurations $(k_0, k_1, t_{\max})$ across four portfolio sizes. Conservative schedules such as $(k_0, k_1) = (0.02, 0.05)$ prioritize stability for small portfolios ($N \in \{5, 10\}$), while more aggressive settings like $(0.007, 0.75)$ are explored for the largest case ($N = 50$).
Constraint tolerance settings are adapted to problem size. For small portfolios ($N < 20$), we use tight tolerances $(\epsilon_{\mathrm{budget}}, \epsilon_{\mathrm{card}}) = (10^{-6}, 10^{-6})$ to enforce near-exact constraint satisfaction. For larger portfolios ($N \geq 20$), these are relaxed to $(0.2, 0.5)$ to accommodate more aggressive projection gains $k_1$ while maintaining numerical stability. The projection tolerance $\epsilon_{\mathrm{tol}}$ is fixed at $10^{-10}$ throughout, and the maximum integration step size is $h_{\max} = 0.1$ for all experiments.

4.5. Computational Characteristics and Hardware Deployment

The solver’s computational profile makes it well suited for neuromorphic hardware deployment. Each projection spike requires a sparse matrix-vector multiplication with $C$ costing $O(N)$ operations due to the block structure, while gradient evaluations cost $O(N^2)$ due to the dense covariance matrix $\Sigma$. Empirically, we observe fewer than three projection iterations per spike event and fewer than $2 \times 10^5$ gradient evaluations even for the largest portfolios ($N = 50$), keeping total solve times within single-digit seconds on conventional CPUs.
The algorithmic primitives (vector-matrix products, element-wise comparisons, and fixed-magnitude updates) map directly to the computational model of neuromorphic processors. Unlike traditional optimization methods requiring matrix factorizations or complex data structures, the SNN dynamics operate through simple local computations that can be massively parallelized. Specialized hardware such as Intel Loihi [16] or SpiNNaker2 [18] implements these operations in event-driven, low-power circuits designed specifically for spiking neural network execution.
Our Python implementation serves as a proof of concept demonstrating solution quality and algorithmic feasibility. Neuromorphic computing remains an emerging technology, and the primary contribution of this work lies in developing an SNN-friendly problem formulation and algorithm that maps naturally to spiking neural network dynamics. The current implementation simulates the SNN mechanics on conventional CPUs to validate the algorithmic approach and demonstrate competitive solution quality. While the algorithmic primitives (vector operations, threshold comparisons, and fixed-magnitude updates) map directly to the computational model of neuromorphic processors [16,17], actual hardware implementation and quantitative performance benchmarking on specialized neuromorphic platforms remain important directions for future research. The path to deployment would require mapping the constraint matrix $C$ to neuron connectivity patterns, encoding the gradient $A y + b$ as input currents, and optimizing spike routing and synaptic weight representation for specific hardware architectures. Such implementation work would enable quantitative assessment of speedup factors, energy efficiency gains, and real-time performance, providing concrete evidence for the hardware acceleration potential suggested by the algorithmic structure.

5. Experimental Results

This section presents comprehensive empirical validation of the SNN-inspired portfolio optimization method. We describe the dataset and experimental design, analyze parameter sweeps across portfolio sizes to characterize solution quality and computational requirements, and compare against established baselines including exact mixed-integer programming, convex relaxation, and particle swarm optimization. The results demonstrate that the SNN approach achieves competitive solution quality while establishing a foundation for neuromorphic hardware deployment.

5.1. Experimental Setup

5.1.1. Computational Environment

All experiments were conducted on an Apple MacBook Pro with Apple M1 chip (8-core CPU), 16 GB unified memory, running macOS. The implementation uses Python 3.12 with key scientific computing libraries including NumPy, SciPy, CVXPY, and PySwarms. All computations were executed in single-threaded mode to ensure fair comparison across methods and to reflect the sequential nature of the current SNN simulation implementation. The reported runtimes should be interpreted as baseline performance on consumer-grade hardware; specialized neuromorphic processors would be expected to achieve different performance characteristics due to their event-driven, massively parallel architecture. Results should be reproducible on similar hardware configurations, though exact runtimes may vary based on system load and Python version differences.

5.1.2. Dataset and Financial Parameters

We evaluate the method using five years of daily closing prices for 100 liquid equities from major global indices, spanning the technology, finance, consumer, energy, industrial, and healthcare sectors. Stock price data were obtained from the Yahoo Finance API, covering adjusted closing prices from October 2019 to October 2024, yielding approximately 1260 trading days per stock after excluding weekends and market holidays. Ticker symbols were chosen to ensure liquidity and data availability throughout the entire five-year period, and the universe spans diverse exchanges, sectors, and market capitalizations, capturing the heterogeneous dependency structures typical of institutional portfolios. For the largest portfolio scenario ($N = 50$), we randomly selected 50 stocks from the full 100-stock dataset using a fixed random seed (seed = 42), so that all experiments use the same subset and comparisons across methods are fair. This selection procedure is applied consistently across all baseline comparisons, ensuring that the MIQP, QP + $\ell_1$, PSO, and SNN solvers all operate on identical problem instances. From historical prices we compute the mean returns $\mu \in \mathbb{R}^{1 \times N}$ and covariance matrix $\Sigma \in \mathbb{R}^{N \times N}$ using standard sample estimators (sample mean and sample covariance).
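The sample estimators can be sketched as follows; synthetic returns stand in here for the market data described above.

```python
import numpy as np

# Sample estimators for mu and Sigma from a T x N matrix of daily returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(1260, 5))   # ~5 years, 5 assets (synthetic)

mu = returns.mean(axis=0)                 # sample mean return per asset
Sigma = np.cov(returns, rowvar=False)     # N x N sample covariance matrix
```

With far more observations than assets ($T = 1260 \gg N$), the sample covariance is well conditioned; for larger universes, shrinkage estimators are a common refinement, though the text reports using the plain sample estimators.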
Transaction costs are modeled as proportional fees with a random vector $\alpha \in \mathbb{R}^{1 \times N}$ scaled to yield an aggregate 10% surcharge: $\mathbf{1} \alpha^\top = 0.1$. This approximates realistic trading costs for liquid equities. The transaction cost vector is generated using a Dirichlet distribution with uniform parameters, ensuring positive costs that sum to the target total. The random seed for transaction cost generation is fixed (seed = 42) to ensure reproducibility across experiments.
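This generation step is straightforward to reproduce: Dirichlet draws are positive and sum to one, so scaling by 0.1 yields the aggregate surcharge. The exact API call used by the authors is not given, so the NumPy form below is one plausible reconstruction using the seed reported in the text.

```python
import numpy as np

# Transaction-cost vector: positive entries summing to 0.1 (10% aggregate).
rng = np.random.default_rng(42)           # fixed seed, as reported in the text
alpha = 0.1 * rng.dirichlet(np.ones(50))  # uniform Dirichlet parameters, N = 50
```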
We evaluate four scenarios with increasing problem size:
  • $N = 5$, cardinality $k = 3$
  • $N = 10$, cardinality $k = 5$
  • $N = 20$, cardinality $k = 10$
  • $N = 50$, cardinality $k = 20$
For each scenario, we extract the corresponding subset from the full dataset and construct reduced mean and covariance matrices. The risk-return trade-off parameter is fixed at $\Lambda = 1$ across all experiments.

5.1.3. Baseline Methods

We compare against three established approaches implemented in the same Python environment:
  • Mixed-Integer Quadratic Programming (MIQP): Exact solution via ECOS_BB through CVXPY, providing optimality certificates.
  • Convex Relaxation (QP + $\ell_1$): Continuous relaxation solved with OSQP followed by top-$k$ asset selection.
  • Particle Swarm Optimization (PSO): Metaheuristic search implemented with PySwarms using 64–96 particles.
All methods share the same data pipeline and post-processing (Section 4), ensuring fair comparison.

5.1.4. Hyperparameter Configuration

Hyperparameter selection for each method follows established practices and problem-size considerations. For the SNN solver, parameter schedules $(k_0, k_1)$ are selected through systematic sweeps as described in Section 4, with conservative values for small portfolios and more aggressive settings for larger problems to balance convergence speed and solution quality. The MIQP solver (ECOS_BB) uses default tolerance settings from CVXPY, with optimality tolerance $10^{-6}$ and no explicit time limit. The convex relaxation approach (QP + $\ell_1$) employs OSQP with regularization parameter $\lambda_{\ell_1} = 0.02$, selected through preliminary experiments to balance sparsity promotion and solution quality, and convergence tolerance $10^{-6}$. The PSO implementation uses swarm sizes of 64 particles for small portfolios ($N \leq 10$) and 96 particles for larger problems, with inertia weight 0.75, cognitive parameter 1.4, and social parameter 1.4, following standard recommendations. PSO iterations are set to 100 for small portfolios and 150 for larger problems to ensure convergence. All hyperparameters are fixed across experiments within each method, with selection based on problem characteristics and standard practices from the optimization literature.

5.1.5. SNN Parameter Selection and Recommended Values

The SNN solver requires selection of two primary parameters $(k_0, k_1)$ that control the gradient descent rate and projection spike magnitude, respectively. Based on our comprehensive parameter study across four portfolio sizes, we provide systematic guidance for parameter selection and recommended values for different problem scales.
Parameter Roles and Physical Interpretation: The gradient step size $k_0$ controls how aggressively the solver descends along the objective gradient. Larger values ($k_0 > 0.05$) accelerate convergence but may overshoot optimal solutions, while smaller values ($k_0 < 0.01$) ensure stability but require longer simulation times. The projection step size $k_1$ determines the magnitude of constraint correction spikes. Larger values ($k_1 > 0.5$) enforce constraints more aggressively but may cause oscillations, while smaller values ($k_1 < 0.1$) give smoother trajectories but slower constraint satisfaction. The interaction between these parameters creates distinct operational regimes: fast schedules prioritize low runtimes for latency-sensitive applications, while high-quality schedules maximize solution quality at the cost of longer computation.
Problem-Size Based Selection Methodology: Our parameter study reveals that the optimal choice depends strongly on portfolio size. For small portfolios ($N \in \{5, 10\}$), conservative schedules such as $(k_0, k_1) = (0.02, 0.05)$ or $(0.05, 0.1)$ provide stable convergence within one second while achieving competitive returns. For medium-sized portfolios ($N = 20$), we recommend moderate settings like $(0.05, 0.1)$ for balanced performance, or more aggressive schedules like $(0.035, 0.45)$ for higher returns when runtime is less critical. For large portfolios ($N = 50$), the parameter space exhibits clear trade-offs: fast schedules such as $(0.12, 0.02)$ terminate in under 0.2 s but achieve lower returns (0.118% daily), while high-quality schedules like $(0.007, 0.75)$ require 8-9 s but achieve the highest returns (0.261% daily) among all evaluated methods.
Recommended Parameter Values: For latency-critical applications requiring sub-second runtimes, we recommend the fast schedules: $(0.12, 0.02)$ for large portfolios, $(0.20, 0.50)$ for medium portfolios, and $(0.05, 0.1)$ for small portfolios. For applications prioritizing solution quality, we recommend the high-quality schedules: $(0.007, 0.75)$ for large portfolios, $(0.035, 0.45)$ for medium portfolios, and $(0.05, 0.1)$ for small portfolios. Balanced schedules provide a middle ground, achieving competitive returns with moderate runtimes suitable for most practical applications.
Sensitivity Analysis: Our parameter sweeps show that solution quality is more sensitive to $k_1$ (projection magnitude) than to $k_0$ (gradient rate) for large portfolios, while both parameters have comparable influence for small portfolios. The constraint tolerances $(\epsilon_{\mathrm{budget}}, \epsilon_{\mathrm{card}})$ should be tightened to $(10^{-6}, 10^{-6})$ for small portfolios to ensure near-exact constraint satisfaction, while relaxed tolerances of $(0.2, 0.5)$ are appropriate for larger portfolios to accommodate aggressive projection gains. The simulation time horizon $t_{\max}$ should be set to 1000 for small portfolios, 5000 for medium portfolios, and 10,000 for large portfolios, based on observed convergence patterns.
Practical Guidelines: For practitioners applying the SNN solver to new portfolio optimization problems, we recommend starting from conservative parameter values (e.g., $(0.05, 0.1)$ for small and medium portfolios, or $(0.12, 0.02)$ for large portfolios) and performing a small parameter sweep around them to identify the best trade-off between runtime and solution quality for the problem at hand. The systematic parameter study in Section 5 shows that the SNN solver exhibits smooth parameter sensitivity, making fine-tuning straightforward. The two-parameter structure $(k_0, k_1)$ also simplifies tuning compared to metaheuristic methods [7] that require setting multiple population and mutation parameters.

5.2. Parameter Study Across Portfolio Sizes

We systematically explore solver configurations $(k_0, k_1)$ to characterize the trade-off between solution quality and computational cost. For small portfolios ($N \in \{5, 10\}$), we test $(k_0, k_1) \in \{0.02, 0.05\} \times \{0.05, 0.1\}$. For $N = 20$, we evaluate six configurations ranging from conservative $(0.05, 0.1)$ to aggressive $(0.20, 1.5)$. For the largest case ($N = 50$), we explore six schedules from fast $(0.12, 0.02)$ to high-quality $(0.007, 0.75)$. Constraint tolerances are set to $(\epsilon_{\mathrm{budget}}, \epsilon_{\mathrm{card}}) = (10^{-6}, 10^{-6})$ for $N < 20$ and relaxed to $(0.2, 0.5)$ for larger problems, as described in Section 4.
Table 3 summarizes key metrics for all configurations. For small portfolios, all schedules converge within one second and produce similar post-processed portfolios. Increasing $k_0$ from 0.02 to 0.05 improves the expected return from 0.156% to 0.163% daily. For $N = 20$, more pronounced differences emerge: aggressive settings like $(0.035, 0.45)$ achieve 0.260% daily return but accumulate larger transient violations, while the conservative $(0.05, 0.1)$ maintains tighter feasibility with 0.106% return. The largest portfolio ($N = 50$) shows a clear hierarchy: $(0.007, 0.75)$ delivers the highest return (0.261%) in 8.2 s, while $(0.12, 0.02)$ terminates in 0.11 s but achieves only 0.118%.
Figure 2 and Figure 3 visualize these results. The trajectory panels demonstrate convergence patterns across all problem sizes, showing that post-processing successfully recovers feasible portfolios even when transient violations occur. The scaling analysis reveals smooth runtime growth from 0.5 s for small portfolios to 8.4 s for conservative large-portfolio schedules. Larger problems exhibit tighter clustering on the Pareto frontier compared to smaller universes with flatter efficient frontiers.
These results demonstrate two operational regimes: fast schedules suitable for latency-sensitive applications (sub-second runtimes) and high-quality schedules that maximize returns at the cost of longer computation. Critically, all reported runtimes reflect Python CPU simulation, while neuromorphic hardware deployment would compress these timings by orders of magnitude while preserving solution quality.

5.3. Baseline Comparison

We compare the SNN solver against established methods on the largest portfolio ($N = 50$, $k = 20$). Table 4 reports performance metrics. The MIQP solver (ECOS_BB) provides the optimality benchmark, requiring 1.62 s to exhaustively explore the discrete solution space. The convex relaxation (QP + $\ell_1$ with OSQP) achieves competitive risk-return profiles in just 19 milliseconds by solving a continuous approximation. PSO completes in 80 milliseconds but sacrifices solution quality, achieving only 0.092% daily return compared to 0.225% for MIQP. The SNN solver at $(k_0, k_1) = (0.007, 0.75)$ achieves the highest expected return (0.261%) among all methods, though with higher variance (0.00108) than MIQP (0.000433).
Figure 4 and Figure 5 position the SNN approach on the Pareto frontier: it dominates PSO in solution quality while remaining within an order of magnitude of MIQP runtime. Figure 6 reveals moderate overlap in asset selections across methods, with Jaccard similarities ranging from 0.14 to 0.29. This moderate overlap reflects the inherent multiplicity of near-optimal solutions in portfolio optimization: different methods converge to different asset combinations that achieve similar risk-return profiles. The SNN solver exhibits a Jaccard similarity of 0.29 with both MIQP and PSO (sharing 9 common assets) and 0.18 with QP + $\ell_1$ (sharing 6 common assets). While the selected asset sets differ, all methods achieve competitive objective values, demonstrating that the SNN post-processing successfully identifies feasible discrete portfolios within the efficient frontier.
The key insight is that while current Python runtimes are comparable to conventional solvers, the SNN formulation uniquely enables neuromorphic hardware deployment. Unlike MIQP (which requires branch-and-bound) or PSO (which requires extensive population evaluations), the SNN dynamics map directly to specialized hardware architectures, offering a clear path to orders-of-magnitude acceleration.

5.4. Discussion

The experimental results establish that the SNN-inspired approach achieves solution quality competitive with exact methods while maintaining computational efficiency comparable to heuristics. The parameter study reveals two practical operating regimes: fast schedules delivering sub-second runtimes for latency-critical applications, and high-quality schedules that maximize returns in several seconds. Post-processing successfully enforces discrete cardinality constraints even when transient violations occur during continuous optimization, validating the relaxation strategy described in Section 4.
The baseline comparison demonstrates that the SNN solver matches MIQP solution quality while avoiding the exponential scaling inherent in branch-and-bound methods. Unlike PSO, which lacks convergence guarantees, the SNN dynamics build on the theoretical foundation of projected gradient descent for constrained quadratic programs. The moderate overlap in asset selections (Jaccard similarity 0.14–0.29) reflects the multiplicity of near-optimal solutions: different optimization paths converge to different asset combinations with similar risk-return characteristics. This diversity is expected in portfolio optimization, where the efficient frontier contains many portfolios with comparable objectives. The top-k recovery procedure reliably identifies feasible discrete portfolios, though the specific asset mix depends on the optimization trajectory.
An interesting finding from the baseline comparison is that the SNN solver achieves a higher expected return (0.261% daily) than the exact MIQP solver (0.225%) on the 50-asset portfolio. Two factors explain this difference. First, MIQP enforces strict discrete constraints throughout optimization via branch-and-bound, which may converge to a local optimum or terminate early due to computational limits. The SNN approach instead solves a relaxed continuous problem and enforces discrete constraints only in post-processing, allowing the optimization trajectory to explore a broader solution space before discrete recovery. Second, the higher return comes with increased risk: the SNN portfolio exhibits variance of 0.00108 versus 0.000433 for MIQP, so the two solutions occupy different points on the risk-return trade-off curve. Both are valid near-optimal portfolios on the efficient frontier. This illustrates the strength of the SNN approach: by relaxing constraints during optimization and recovering discrete solutions through post-processing, it can identify portfolios with competitive or superior returns while maintaining computational tractability. The different constraint-handling strategies (strict enforcement versus relaxation-then-recovery) naturally lead to different local minima, both of which satisfy the problem constraints. We emphasize that all experiments reported here use the same randomly selected subset of 50 stocks from the 100-stock dataset (selected with seed = 42), ensuring that performance differences reflect algorithmic characteristics rather than data selection effects.
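The relaxation-then-recovery step discussed above can be sketched in a few lines. The function below is an illustrative reconstruction, not the paper's exact implementation: it keeps the k largest relaxed weights, zeroes the rest, and renormalizes so the discrete portfolio stays fully invested.

```python
import numpy as np

# Illustrative sketch of top-k recovery: project a relaxed weight vector
# onto the set of fully invested, long-only portfolios with at most k
# holdings. Variable names are ours; the paper's code may differ in detail.

def topk_recovery(w_relaxed: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest relaxed weights, zero the rest, renormalize."""
    w = np.maximum(w_relaxed, 0.0)       # discard any short positions
    keep = np.argsort(w)[-k:]            # indices of the k largest weights
    w_discrete = np.zeros_like(w)
    w_discrete[keep] = w[keep]
    total = w_discrete.sum()
    if total > 0:
        w_discrete /= total              # re-impose the budget constraint
    return w_discrete

w = np.array([0.30, 0.05, 0.25, 0.10, 0.30])
print(topk_recovery(w, k=3))             # keeps the three largest weights
```

Because the k retained weights are merely rescaled rather than re-optimized, the recovered portfolio is feasible but not necessarily optimal over the selected support; re-solving the small QP restricted to the selected assets is a natural refinement.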
Critically, all reported runtimes represent Python CPU simulation using standard numerical libraries. The algorithmic primitives (vector operations, comparisons, and fixed-magnitude updates) map naturally to neuromorphic processor architectures. Deployment on specialized hardware such as Intel Loihi or SpiNNaker2 would leverage massively parallel, event-driven computation to achieve orders-of-magnitude speedup, enabling real-time portfolio rebalancing at institutional scales. This hardware acceleration path distinguishes the SNN approach from both exact solvers (which require sequential branching) and metaheuristics (which lack a hardware mapping).
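A toy sketch of these primitives, assuming a Markowitz-style objective min ½wᵀΣw − λμᵀw with budget (1ᵀw = 1) and long-only (w ≥ 0) constraints: k0 scales the continuous gradient step and k1 scales the fixed-magnitude "spike" fired whenever a constraint violation is detected, mirroring the paper's nomenclature. The data are synthetic and the loop illustrates the primitives only, not the paper's exact solver.

```python
import numpy as np

# Alternation of continuous gradient flow and event-driven, fixed-magnitude
# projection "spikes" on a synthetic long-only budgeted QP. Illustrative
# sketch of the algorithmic primitives, not the paper's exact solver.

rng = np.random.default_rng(0)
n, lam, k0, k1 = 8, 0.5, 0.02, 0.1
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                      # synthetic positive-definite covariance
mu = rng.normal(0.001, 0.005, size=n)    # synthetic mean daily returns

w = np.full(n, 1.0 / n)                  # start fully diversified
for _ in range(300):
    w -= k0 * (Sigma @ w - lam * mu)     # continuous gradient flow
    w = np.maximum(w, 0.0)               # event: clip long-only violations
    for _ in range(400):                 # event-driven budget corrections
        gap = w.sum() - 1.0
        if abs(gap) < 1e-9:
            break
        w -= k1 * gap / n                # fixed-magnitude corrective "spike"
        w = np.maximum(w, 0.0)

print(f"budget gap after solve: {abs(w.sum() - 1.0):.1e}")
```

On neuromorphic hardware the inner corrections would fire asynchronously as threshold crossings rather than in a nested loop; the point is that every update is a vector operation, a comparison, or a fixed-magnitude change.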

6. Conclusions

This paper demonstrates that portfolio optimization with transaction costs and cardinality constraints can be solved using dynamics inspired by spiking neural networks. We reformulated the constrained Markowitz problem as a quadratic program compatible with leaky integrate-and-fire neuron dynamics, implemented an algorithm that alternates between continuous gradient descent and discrete projection spikes, and validated the approach through comprehensive experiments on real equity data. The experimental results establish that the SNN-inspired solver achieves solution quality that is competitive with, and in some cases exceeds, that of exact mixed-integer programming methods. On the 50-asset portfolio, the SNN solver achieves the highest expected return (0.261% daily) among all evaluated methods, outperforming exact MIQP (0.225%) and PSO (0.092%), while maintaining computational efficiency with runtimes ranging from 0.5 s for small portfolios to 8.4 s for high-quality schedules on large portfolios. The parameter study reveals two operational regimes: fast schedules delivering sub-second runtimes for latency-critical applications, and high-quality schedules maximizing returns at the cost of longer computation. Our systematic comparison with existing optimization methods (Section 2.4) clarifies the distinctive advantages of the SNN approach: unlike exact MIQP solvers that require sequential branch-and-bound operations, the SNN dynamics enable parallel gradient-projection computation; unlike convex relaxation methods that lack principled constraint enforcement, the SNN approach integrates constraint satisfaction directly into the optimization dynamics; and unlike metaheuristic methods that require extensive parameter tuning, the SNN solver requires only two key parameters (k0, k1) with clear physical interpretations and systematic selection guidelines (Section 5.1.5).
The primary contribution extends beyond current Python performance: by establishing the mathematical connection between constrained optimization and SNN dynamics, we provide a foundation for neuromorphic hardware deployment that could deliver orders-of-magnitude acceleration. The algorithmic primitives (vector operations, threshold comparisons, and fixed-magnitude updates) map naturally to specialized processors such as Intel Loihi and SpiNNaker2, distinguishing this approach from exact solvers (which require sequential branching) and metaheuristics (which lack a hardware mapping). Neuromorphic deployment would enable real-time portfolio rebalancing at institutional scales currently impractical for conventional methods.
Future work includes neuromorphic hardware implementation (mapping constraint matrices to connectivity patterns), formal convergence analysis, extensions to complex constraints (lot sizes, ESG criteria), adaptive parameter selection, and integration with online learning for streaming market data. A deeper analysis of the performance differences between the SNN solver and exact MIQP solvers would provide valuable insights into the optimization landscape: investigating when and why approximation methods can outperform exact solvers, characterizing the structure of local and global optima, and understanding how the relaxation-then-recovery strategy affects solution quality across problem instances. The SNN optimization framework extends beyond portfolio optimization to any constrained quadratic programming problem with linear inequality constraints, including resource allocation, network optimization, and sensor selection problems. Practitioners can adapt the method by reformulating their problem in the standard form (17), constructing sparse constraint matrices, and following the parameter selection guidelines in Section 5.1.5. By bridging neural dynamics, convex optimization, and financial applications, this work opens pathways for efficient real-time decision-making systems grounded in both theoretical principles and hardware reality.
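For practitioners following the adaptation recipe above, assembling the sparse constraint block is routine. The layout below (a budget equality row stacked on identity bound rows) is a hypothetical illustration; the actual standard form (17) and constraint ordering are defined in the paper.

```python
import numpy as np
from scipy import sparse

# Hypothetical sparse constraint block for a budget-plus-box constrained
# portfolio QP: one equality row (1^T w = 1) stacked on an identity block
# for the per-asset bounds. Illustrative layout only; the paper's standard
# form (17) fixes the actual ordering.

n = 50
budget = sparse.csr_matrix(np.ones((1, n)))  # 1^T w = 1
bounds = sparse.eye(n, format="csr")         # 0 <= w_i <= 1, one row per asset
C = sparse.vstack([budget, bounds]).tocsr()

print(C.shape, C.nnz)  # (51, 50) 100
```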

Author Contributions

Conceptualization, S.L.; Methodology, A.H.K. and S.L.; Software, A.H.K. and A.M.M.; Validation, A.H.K.; Formal analysis, S.L.; Data curation, A.H.K.; Writing—original draft, A.H.K.; Writing—review & editing, A.M.M. and S.L.; Visualization, A.M.M.; Supervision, S.L.; Project administration, A.M.M. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Abbreviations:
SNN: Spiking Neural Network
LIF: Leaky Integrate-and-Fire
MIQP: Mixed-Integer Quadratic Program
PSO: Particle Swarm Optimization
QP: Quadratic Program
ODE: Ordinary Differential Equation
ECOS_BB: Embedded Conic Solver with Branch-and-Bound
OSQP: Operator Splitting Quadratic Program
Mathematical Symbols:
V_m: Membrane potential
V_th: Threshold voltage
τ_m: Membrane time constant
t: Portfolio weights vector
z: Binary selection variables
μ: Mean return vector
Σ: Covariance matrix
k: Cardinality constraint
N: Number of stocks
C: Constraint matrix
k0: Gradient descent step size
k1: Projection step size

References

  1. Markowitz, H. Portfolio Selection. J. Financ. 1952, 7, 77. [Google Scholar] [CrossRef]
  2. Davis, M.H.A.; Norman, A.R. Portfolio Selection with Transaction Costs. Math. Oper. Res. 1990, 15, 676–713. [Google Scholar] [CrossRef]
  3. Bienstock, D. Computational study of a family of mixed-integer quadratic programming problems. Math. Program. 1996, 74, 121–140. [Google Scholar] [CrossRef]
  4. Chang, T.J.; Meade, N.; Beasley, J.; Sharaiha, Y. Heuristics for cardinality constrained portfolio optimisation. Comput. Oper. Res. 2000, 27, 1271–1302. [Google Scholar] [CrossRef]
  5. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  6. Cura, T. Particle swarm optimization approach to portfolio optimization. Nonlinear Anal. Real World Appl. 2009, 10, 2396–2406. [Google Scholar] [CrossRef]
  7. Khan, A.H.; Cao, X.; Katsikis, V.N.; Stanimirovic, P.; Brajevic, I.; Li, S.; Kadry, S.; Nam, Y. Optimal Portfolio Management for Engineering Problems Using Nonconvex Cardinality Constraint: A Computing Perspective. IEEE Access 2020, 8, 57437–57450. [Google Scholar] [CrossRef]
  8. Khan, A.H.; Cao, X.; Luo, C.; Zhang, S.; Guo, W.; Katsikis, V.N.; Li, S. Spiking Neural Networks: A Comprehensive Survey of Training Methodologies, Hardware Implementations and Applications. Artif. Intell. Sci. Eng. 2025, 1, 175–207. [Google Scholar] [CrossRef]
  9. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  10. Hopfield, J.J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 1984, 81, 3088–3092. [Google Scholar] [CrossRef]
  11. Hopfield, J.J.; Tank, D.W. “Neural” computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 141–152. [Google Scholar] [CrossRef]
  12. Barrett, D.G.; Denève, S.; Machens, C.K. Optimal compensation for neuron loss. eLife 2016, 5, e12454. [Google Scholar] [CrossRef]
  13. Boerlin, M.; Machens, C.K.; Denève, S. Predictive Coding of Dynamical Variables in Balanced Spiking Networks. PLoS Comput. Biol. 2013, 9, e1003258. [Google Scholar] [CrossRef] [PubMed]
  14. Mancoo, A.; Keemink, S.; Machens, C.K. Understanding spiking networks through convex optimization. Adv. Neural Inf. Process. Syst. 2020, 33, 8824–8835. [Google Scholar]
  15. Stanojevic, A.; Woźniak, S.; Bellec, G.; Cherubini, G.; Pantazi, A.; Gerstner, W. High-performance deep spiking neural networks with 0.3 spikes per neuron. Nat. Commun. 2024, 15, 6793. [Google Scholar] [CrossRef] [PubMed]
  16. Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  17. Orchard, G.; Frady, E.P.; Rubin, D.B.D.; Sanborn, S.; Shrestha, S.B.; Sommer, F.T.; Davies, M. Efficient Neuromorphic Signal Processing with Loihi 2. In Proceedings of the 2021 IEEE Workshop on Signal Processing Systems (SiPS), Coimbra, Portugal, 19–21 October 2021. [Google Scholar] [CrossRef]
  18. Furber, S.B.; Galluppi, F.; Temple, S.; Plana, L.A. The SpiNNaker Project. Proc. IEEE 2014, 102, 652–665. [Google Scholar] [CrossRef]
  19. Merolla, P.A.; Arthur, J.V.; Alvarez-Icaza, R.; Cassidy, A.S.; Sawada, J.; Akopyan, F.; Jackson, B.L.; Imam, N.; Guo, C.; Nakamura, Y.; et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 2014, 345, 668–673. [Google Scholar] [CrossRef]
  20. Xia, Y.; Leung, H.; Wang, J. A projection neural network and its application to constrained optimization problems. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2002, 49, 447–458. [Google Scholar] [CrossRef]
  21. Xia, Y.; Wang, J. A General Projection Neural Network for Solving Monotone Variational Inequalities and Related Optimization Problems. IEEE Trans. Neural Netw. 2004, 15, 318–328. [Google Scholar] [CrossRef]
  22. Liu, Q.; Wang, J. A One-Layer Projection Neural Network for Nonsmooth Optimization Subject to Linear Equalities and Bound Constraints. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 812–824. [Google Scholar] [CrossRef]
  23. Perin, G.; Bidini, C.; Mazzieri, R.; Rossi, M. ADMM-Based Training for Spiking Neural Networks. arXiv 2025, arXiv:2505.05527. [Google Scholar] [CrossRef]
  24. Jin, L.; Li, S.; Luo, X.; Li, Y.; Qin, B. Neural Dynamics for Cooperative Control of Redundant Robot Manipulators. IEEE Trans. Ind. Inform. 2018, 14, 3812–3821. [Google Scholar] [CrossRef]
  25. Lagunas, E.; Ortiz, F.; Eappen, G.; Daoud, S.; Martins, W.A.; Querol, J.; Chatzinotas, S.; Skatchkovsky, N.; Rajendran, B.; Simeone, O. Performance Evaluation of Neuromorphic Hardware for Onboard Satellite Communication Applications. arXiv 2024, arXiv:2401.06911. [Google Scholar] [CrossRef]
  26. Kartashov, I.; Pushkareva, M.; Karandashev, I. SpikeFit: Towards Optimal Deployment of Spiking Networks on Neuromorphic Hardware. arXiv 2025, arXiv:2510.15542. [Google Scholar] [CrossRef]
  27. Schemmel, J.; Brüderle, D.; Grübl, A.; Hock, M.; Meier, K.; Millner, S. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010. [Google Scholar] [CrossRef]
  28. Salo, A.; Doumpos, M.; Liesiö, J.; Zopounidis, C. Fifty years of portfolio optimization. Eur. J. Oper. Res. 2024, 318, 1–18. [Google Scholar] [CrossRef]
  29. DeMiguel, V.; Nogales, F.J. Portfolio Selection with Robust Estimation. Oper. Res. 2009, 57, 560–577. [Google Scholar] [CrossRef]
  30. Qiu, H.; Han, F.; Liu, H.; Caffo, B. Robust portfolio optimization. Adv. Neural Inf. Process. Syst. 2015, 28, 46–54. [Google Scholar]
  31. Chen, R.; Yu, L. A novel nonlinear value-at-risk method for modeling risk of option portfolio with multivariate mixture of normal distributions. Econ. Model. 2013, 35, 796–804. [Google Scholar] [CrossRef]
  32. Khan, A.H.; Cao, X.; Li, S.; Katsikis, V.N.; Liao, L. BAS-ADAM: An ADAM based approach to improve the performance of beetle antennae search optimizer. IEEE CAA J. Autom. Sin. 2020, 7, 461–471. [Google Scholar] [CrossRef]
  33. Khan, A.H.; Cao, X.; Li, S.; Luo, C. Using Social Behavior of Beetles to Establish a Computational Model for Operational Management. IEEE Trans. Comput. Soc. Syst. 2020, 7, 492–502. [Google Scholar] [CrossRef]
  34. Chen, W.; Zhang, H.; Mehlawat, M.K.; Jia, L. Mean–variance portfolio optimization using machine learning-based stock price prediction. Appl. Soft Comput. 2021, 100, 106943. [Google Scholar] [CrossRef]
  35. Shoaf, J.M.; Foster, J.A. Behavior of a genetic algorithm used for yield optimization. In Proceedings of the International Conference on Microelectronic Test Structures, Monterey, CA, USA, 17–20 March 1997; pp. 203–207. [Google Scholar]
  36. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  37. Jiang, X.; Li, S. Beetle Antennae Search without Parameter Tuning (BAS-WPT) for Multi-objective Optimization. arXiv 2017, arXiv:1711.02395. [Google Scholar] [CrossRef]
  38. Fischer, T.; Krauss, C. Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res. 2018, 270, 654–669. [Google Scholar] [CrossRef]
  39. Betancourt, C.; Chen, W.H. Deep reinforcement learning for portfolio management of markets with a dynamic number of assets. Expert Syst. Appl. 2021, 164, 114002. [Google Scholar] [CrossRef]
  40. Stellato, B.; Banjac, G.; Goulart, P.; Bemporad, A.; Boyd, S. OSQP: An operator splitting solver for quadratic programs. Math. Program. Comput. 2020, 12, 637–672. [Google Scholar] [CrossRef]
  41. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar] [CrossRef]
  42. O’Brien, C.M. Statistical Learning with Sparsity: The Lasso and Generalizations. Int. Stat. Rev. 2016, 84, 156–157. [Google Scholar] [CrossRef]
  43. Candès, E.J. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Math. 2008, 346, 589–592. [Google Scholar] [CrossRef]
  44. Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
Figure 1. Complete workflow of the SNN-inspired portfolio optimization algorithm. The flowchart illustrates the alternating structure between continuous gradient descent (smooth trajectories) and discrete projection spikes (constraint corrections), event-driven constraint violation detection, and the deterministic post-processing step that recovers discrete portfolios from relaxed solutions.
Figure 2. Parameter sweep across problem sizes. Each panel compares all ( k 0 , k 1 ) schedules evaluated for that universe, showing time-series traces for objective value, raw constraint violation, and expected daily return together with bar charts for solve time, cumulative projection distance, and post-processed objective/return. Solid lines denote runs that satisfied the convergence tolerance; dashed lines indicate trajectories that terminated after reaching the time horizon while remaining feasible after post-processing.
Figure 3. Solve-time versus quality trade-offs. Points are colour-coded by N and use distinct markers for each ( k 0 , k 1 ) pair. Red outlines denote runs that reached the iteration cap; their post-processed solutions retain comparable objective value and return to the converged configurations.
Figure 4. Runtime versus expected return for MIQP, QP + ℓ1, PSO, and the SNN solver on the N = 50, k = 20 universe. The SNN configuration sits on the Pareto frontier, trading only a moderate runtime increase for the highest return.
Figure 5. Markowitz objective ranking across the same methods. MIQP and the SNN schedule achieve nearly identical objectives, whereas PSO plateaus at a higher cost despite its quick convergence.
Figure 6. Portfolio overlap heatmap (Jaccard similarity) between the evaluated methods. The heatmap shows moderate overlap (similarity values ranging from 0.14 to 0.29), reflecting that different optimization methods converge to different asset combinations while achieving similar risk-return profiles. This multiplicity of near-optimal solutions is characteristic of portfolio optimization problems, where the efficient frontier contains many portfolios with comparable objectives.
Table 1. Evolution of neural dynamics for constrained optimization.
| Year | Work | Network Type | Problem Class | Key Contribution |
|---|---|---|---|---|
| 1982 | Hopfield [9] | Analog recurrent | Energy minimization | Demonstrated neural networks perform gradient descent on energy functions through distributed computation |
| 1985 | Hopfield & Tank [11] | Analog recurrent | Combinatorial (TSP) | Applied neural dynamics to combinatorial optimization; established neuron-optimization connection |
| 2002 | Xia & Wang [20] | Projection network | Constrained convex | Introduced projection neural networks with guaranteed convergence for linear inequality constraints |
| 2004 | Xia & Wang [21] | Projection network | Variational inequalities | Extended projection networks to monotone variational inequalities with global convergence proofs |
| 2013 | Boerlin et al. [13] | Spiking (LIF) | Predictive coding | Showed balanced spiking networks implicitly minimize prediction error through spike-based optimization |
| 2016 | Barrett & Deneve [12] | Spiking (LIF) | Convex QP | Proved LIF threshold dynamics implement projected gradient descent; spikes enforce constraints |
| 2020 | Mancoo et al. [14] | Balanced SNN | Constrained QP | Unified framework: demonstrated balanced SNNs solve quadratic programs with linear constraints |
| 2024 | Stanojevic et al. [15] | Deep SNN | Deep learning | Achieved 0.3 spikes/neuron efficiency; validated SNN optimization scales to deep architectures |
| 2025 | This work | SNN-inspired | Portfolio QP | Applied SNN dynamics to financial optimization; demonstrated neuromorphic deployment pathway |
Table 2. Neuromorphic hardware platforms for SNN-based optimization.
| Platform | Organization | Year | Scale | Architecture | Energy/Spike | Key Features |
|---|---|---|---|---|---|---|
| TrueNorth [19] | IBM | 2014 | 1M neurons, 256M synapses | Digital event-driven | 26 pJ | Ultra-low power; fixed architecture; real-time operation |
| SpiNNaker [18] | U. Manchester | 2013 | 1M+ neurons (scalable) | ARM multi-core | 40.5 pJ | Highly scalable; flexible software; general-purpose simulation |
| Loihi [16] | Intel | 2018 | 130K neurons/chip | Asynchronous mesh | 23.6 pJ | On-chip learning; programmable plasticity; 1000× GPU efficiency |
| Loihi 2 [17] | Intel | 2021 | 1M neurons/chip | Enhanced mesh | <20 pJ (est.) | Improved throughput; efficient signal processing; production-ready |
| BrainScaleS-2 [27] | U. Heidelberg | 2020 | 512 neurons/wafer | Analog VLSI | 2.1 pJ | 10,000× real-time; continuous-time analog; accelerated dynamics |

Notes: Energy efficiency comparisons based on spike operations. Conventional GPU operations consume 25–100 nJ (INT8/FP32), yielding a 1000–5000× energy advantage for neuromorphic platforms. These platforms provide natural deployment targets for SNN-based optimization algorithms through event-driven computation, distributed memory, and massively parallel processing.
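As a quick arithmetic check of the quoted energy-advantage range, dividing the stated 25–100 nJ GPU operation energy by the per-spike figures in Table 2:

```python
# Reproduce the rough energy-advantage range quoted in the table notes:
# conventional GPU operation energy (25-100 nJ) divided by the per-spike
# energy of each platform in Table 2 (figures in pJ).

spike_pj = {
    "TrueNorth": 26.0,
    "SpiNNaker": 40.5,
    "Loihi": 23.6,
    "Loihi 2": 20.0,
    "BrainScaleS-2": 2.1,
}
gpu_nj_low, gpu_nj_high = 25.0, 100.0

for platform, pj in spike_pj.items():
    low = gpu_nj_low * 1000.0 / pj     # convert nJ to pJ before dividing
    high = gpu_nj_high * 1000.0 / pj
    print(f"{platform}: {low:,.0f}x to {high:,.0f}x")
```

The digital platforms land in roughly the quoted 1000–5000× band (e.g., 25 nJ / 23.6 pJ ≈ 1059× for Loihi); the analog BrainScaleS-2 figure is higher still.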
Table 3. Summary of SNN solver outcomes across parameter settings. Expected return is reported as a percentage of initial capital and corresponds to the post-processed binary portfolio. Entries marked “No” in the convergence column finished after reaching the pseudo-time horizon.
| N | k0 | k1 | Converged | Solve Time (s) | Final Objective | Expected Return (%) | Max Raw Violation |
|---|---|---|---|---|---|---|---|
| 5 | 0.02 | 0.05 | Yes | 0.49 | 1.11 × 10⁻³ | 0.156 | 0.28 |
| 5 | 0.02 | 0.1 | Yes | 0.50 | 1.11 × 10⁻³ | 0.156 | 0.28 |
| 5 | 0.05 | 0.05 | Yes | 0.49 | 1.16 × 10⁻³ | 0.163 | 0.28 |
| 5 | 0.05 | 0.1 | Yes | 0.48 | 1.16 × 10⁻³ | 0.163 | 0.28 |
| 10 | 0.02 | 0.05 | Yes | 0.98 | 9.00 × 10⁻⁴ | 0.117 | 0.29 |
| 10 | 0.02 | 0.1 | Yes | 0.99 | 8.83 × 10⁻⁴ | 0.114 | 0.29 |
| 10 | 0.05 | 0.05 | Yes | 0.98 | 9.31 × 10⁻⁴ | 0.120 | 0.29 |
| 10 | 0.05 | 0.1 | Yes | 0.98 | 9.29 × 10⁻⁴ | 0.120 | 0.29 |
| 20 | 0.01 | 0.4 | Yes | 2.61 | 6.05 × 10⁻⁴ | 0.071 | 9.94 |
| 20 | 0.035 | 0.45 | Yes | 2.59 | 1.53 × 10⁻³ | 0.260 | 12.64 |
| 20 | 0.05 | 0.1 | No | 0.59 | 9.41 × 10⁻⁴ | 0.106 | 0.29 |
| 20 | 0.08 | 0.8 | Yes | 3.01 | 1.09 × 10⁻³ | 0.139 | 31.14 |
| 20 | 0.2 | 0.5 | Yes | 2.71 | 1.34 × 10⁻³ | 0.168 | 14.64 |
| 20 | 0.2 | 1.5 | Yes | 2.62 | 1.15 × 10⁻³ | 0.132 | 59.14 |
| 50 | 0.007 | 0.75 | Yes | 8.19 | 1.53 × 10⁻³ | 0.261 | 37.87 |
| 50 | 0.01 | 1.5 | Yes | 7.43 | 1.41 × 10⁻³ | 0.227 | 75.45 |
| 50 | 0.03 | 0.9 | Yes | 7.47 | 1.33 × 10⁻³ | 0.173 | 45.39 |
| 50 | 0.05 | 0.5 | Yes | 8.35 | 1.37 × 10⁻³ | 0.156 | 41.89 |
| 50 | 0.07 | 0.08 | Yes | 7.50 | 1.25 × 10⁻³ | 0.142 | 4.39 |
| 50 | 0.12 | 0.02 | No | 0.11 | 1.03 × 10⁻³ | 0.118 | 4.39 |
Table 4. Baseline comparison for the N = 50 , k = 20 scenario.
| Method | Runtime (s) | Objective | Return (%) | Variance |
|---|---|---|---|---|
| MIQP (ECOS_BB) | 1.624 | −0.001812 | 0.225 | 0.000433 |
| QP + ℓ1 | 0.019 | −0.001804 | 0.220 | 0.000392 |
| PSO (penalised) | 0.080 | −0.000795 | 0.092 | 0.000120 |
| SNN (k0 = 0.007, k1 = 0.75) | 8.185 | −0.001529 | 0.261 | 0.001078 |

Share and Cite

MDPI and ACS Style

Khan, A.H.; Mohammed, A.M.; Li, S. Portfolio Optimization: A Neurodynamic Approach Based on Spiking Neural Networks. Biomimetics 2025, 10, 808. https://doi.org/10.3390/biomimetics10120808

