Sometimes when a press release uses "Neural Net" or "AI" it is sweeping a lot under the rug. A feed-forward neural network is a static filter whose coefficients are determined empirically by iterative optimization, referred to as "training" in the popular literature. Mathematically, it is a deterministic parameterized function: the same input always produces the same output (though the output is often interpreted as a conditional probability distribution). Although "neural nets" were popularized decades ago by simple models of synaptic communication, present-day biological neural theories go well beyond static feed-forward models.
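To make "static filter" concrete, here is a minimal sketch of a two-layer feed-forward network as a plain function. The weights are made-up illustrative numbers standing in for "trained" coefficients; there is no internal state and no feedback, so evaluation is purely input-to-output.

```python
import math

# Illustrative fixed coefficients (what "training" would have produced).
W1 = [[0.5, -0.3], [0.8, 0.1]]   # input-to-hidden weights
W2 = [0.7, -0.4]                 # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(x):
    # Hidden layer: weighted sums passed through a nonlinearity.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # Output layer: another weighted sum. No state is carried between calls,
    # so the same input always yields the same output.
    return sum(w * hi for w, hi in zip(W2, h))

y = feed_forward([1.0, 2.0])
```

The point of the sketch is the absence of time: nothing here depends on a previous evaluation.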
Today's advanced systems (e.g., effective autonomous systems) are dynamic. A dynamic neural network is composed of elements wired with feedback, so the system has an initial state and evolves toward a steady state. In continuous time, you can think of this as a set of transistors wired with assorted feedback to produce an output frequency, in the kHz range, that depends on the input. These wirings are represented either by a diagram or by a system of differential equations. For discrete input, we use step-dependent stochastic difference equations:
f[t] = g(f[t-1], h[t-1], ...), with f[0] = initial input.
Here, feedback occurs during runtime evaluation of the function -- not just during the determination of its coefficients, where a different sort of feedback (e.g., backpropagation of error) might be used.
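The runtime feedback described above can be sketched in a few lines. This is a toy instance of f[t] = g(f[t-1], h[t-1]) with one feedback unit; the recurrent weight, input weight, and constant external input h are illustrative assumptions, not anything from the original.

```python
import math

def g(f_prev, h_prev):
    # One feedback unit: tanh squashes the sum of the recurrent signal
    # and the external drive. Weights 0.5 and 0.8 are made up.
    return math.tanh(0.5 * f_prev + 0.8 * h_prev)

def run(f0, h, steps=50):
    f = f0                      # initial state f[0]
    for t in range(1, steps + 1):
        f = g(f, h)             # feedback: the state depends on the prior state
    return f

# With constant external input, the iteration settles to a steady state,
# and different initial conditions converge to the same value.
print(run(f0=0.0, h=1.0))
print(run(f0=0.9, h=1.0))
```

Because tanh is a contraction here (its slope at the fixed point is well below 1), the iteration converges: the transient depends on f[0], but the steady state depends only on the input, which is exactly the initial-state/steady-state behavior the text describes.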