Speakers


Invited Speakers



Dr. Na Liu, Tianjin Normal University, China

 

Na Liu received her PhD from the School of Mathematics at Harbin Institute of Technology in 2021. From September 2021 to September 2023, she was a postdoctoral researcher in the Department of Automation at Tsinghua University, and she is currently on the faculty of the School of Mathematical Sciences at Tianjin Normal University. Her research focuses on neurodynamic optimization theory and multi-agent distributed optimization. She leads a sub-topic of a national key R&D program and a joint fund project of the Tianjin Natural Science Foundation, and as a core member of her research team she has participated in national key R&D program topics, multiple National Natural Science Foundation projects, and open fund projects of national key laboratories. To date, she has published more than 10 high-level academic papers in international journals including IEEE Transactions on Cybernetics (IEEE TCYB), IEEE Transactions on Automatic Control (IEEE TAC), Neural Networks, and Neurocomputing.

 

Speech Title: Neural Dynamics Optimization Algorithm for Nonsmooth Nonconvex Optimization Problems

Speech Abstract: Because many real-world complex systems and models are inherently nonsmooth and nonconvex, nonsmooth nonconvex optimization problems are of significant importance in engineering practice. However, the nonsmoothness and nonconvexity of the objective and constraint functions pose substantial challenges to the design and convergence analysis of optimization algorithms. To address these difficulties, this talk presents a novel smooth gradient approximation neural network. The network introduces a smoothing approximation technique with a time-varying control parameter to handle nonsmooth and nonregular objective functions effectively. Furthermore, to keep the state solution of the proposed neural network within the nonconvex inequality constraint set, a hard comparator function is introduced, and it is proven that any accumulation point of the network's state solution is a stationary point of the studied nonconvex optimization problem. Notably, the network is also capable of finding optimal solutions to certain generalized convex optimization problems. Compared with existing related neural networks, the proposed network has more relaxed convergence conditions and a simpler algorithm structure. Simulation experiments and a practical application to condition-number optimization fully verify the practicality and effectiveness of the proposed algorithm.
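
To make the smoothing idea concrete, here is a minimal, self-contained sketch, not the speaker's actual algorithm: each nonsmooth term |z| is replaced by the standard smoothing sqrt(z^2 + mu^2), the control parameter mu(t) decays over time as the abstract describes, and the resulting gradient flow is integrated with forward Euler. The example objective, the schedule mu(t) = 1/(1+t), and all function names are illustrative assumptions.

```python
import numpy as np

# Illustrative example objective (assumed, not from the talk):
#   f(x) = |x1| + |x2| + 0.5 * ||x - c||^2, nonsmooth at x_i = 0.
# Each |z| is smoothed as sqrt(z^2 + mu^2); the decaying mu(t) plays the role
# of the time-varying control parameter described in the abstract.
c = np.array([1.0, -2.0])

def grad_smoothed(x, mu):
    """Gradient of sum_i sqrt(x_i^2 + mu^2) + 0.5 * ||x - c||^2."""
    return x / np.sqrt(x**2 + mu**2) + (x - c)

def neurodynamic_flow(x0, t_end=200.0, dt=1e-3):
    """Forward-Euler integration of the gradient flow dx/dt = -grad f_mu(t)(x)."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    while t < t_end:
        mu = 1.0 / (1.0 + t)            # decaying smoothing parameter, mu(t) -> 0
        x -= dt * grad_smoothed(x, mu)  # neural-network state update
        t += dt
    return x

print(neurodynamic_flow([5.0, 5.0]))  # settles near (0, -1) as mu(t) -> 0
```

For this toy objective, the subgradient optimality conditions give the stationary point (0, -1), which the smoothed flow approaches as mu(t) decays; the talk's method additionally handles nonconvex inequality constraints via the hard comparator function, which this sketch omits.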