These classes represent polynomials. At present, only the first-order versions are used, but they could be extended to any order.

When evaluating a function, the “CPOLY” classes represent the result in a traditional polynomial form. Consider a series of terms, c0, c1, c2, ... These represent the coefficients of a Taylor series of the function expanded about 0 (a Maclaurin series). Thus:

f(x) = c0 + c1*x + c2*x^2 + ...

The other “FPOLY” classes represent the same polynomial as a Taylor series expanded about a point “x”. Again, consider a series of terms, f0, f1, f2, ... This time the terms represent the function evaluated at x and its derivatives. Therefore, f0 is the value of the function at x, f1 is its first derivative, and so on.

Both of these are equivalent in the sense that they represent the same data, and there are functions (constructors) that convert between them. The “FPOLY” form is usually most convenient for function evaluation used in behavioral modeling and device modeling. The “CPOLY” form is most suitable for stamping into the admittance matrix and current vector for the final solution.
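The relationship between the two forms can be sketched in first order. The type names FPOLY1 and CPOLY1 are Gnucap's; the member layout and the conversion function here are illustrative, not the actual implementation:

```cpp
#include <cassert>

// Sketch of the two first-order forms described above (layout is
// illustrative, not Gnucap's actual classes).
struct FPOLY1 {        // Taylor form, expanded about x
  double x;            // the point of expansion (e.g. a node voltage)
  double f0;           // f(x): the function value
  double f1;           // f'(x): the first derivative
};

struct CPOLY1 {        // conventional form: f(t) = c0 + c1*t
  double c0;
  double c1;
};

// Convert Taylor form to conventional form:
//   f(t) ~= f0 + f1*(t - x) = (f0 - f1*x) + f1*t
CPOLY1 to_cpoly(const FPOLY1& y)
{
  return CPOLY1{y.f0 - y.f1 * y.x, y.f1};
}

double eval_cpoly(const CPOLY1& p, double t)
{
  return p.c0 + p.c1 * t;
}
```

Evaluating the converted CPOLY at the expansion point recovers f0, which is the sense in which the two forms carry the same data.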

Before doing anything: expand(), precalc().

On issuing the “tran” command ...

```
tr_begin()                // get ready
for (each time step) {
  tr_advance()            // precalculate and propagate
  for (each iteration) {
    tr_queue_eval()       // decide which models need evaluating
    do_tr()               // evaluate models
    tr_load()             // build the matrix of equations
    solve the resulting system of equations
  }
  if (converged) {
    tr_review()           // how are we doing? suggest time step
  }
  if (no problems) {
    tr_accept()           // postcalculate and accept data
  }
}
```

The functions referred to above are actually loops that call the corresponding function for every device in the circuit.

Any of these functions may be skipped: if there is evidence that the result will not change from the last call, it probably will not be called. Since this algorithm is not perfect, any particular function may also be called twice, so all of them are written so that calling more than once is equivalent to calling once.

Most simple elements do not have expand functions. Most advanced components do.

Expanding a subcircuit makes a copy of it, and remaps the nodes. Most components use a shallow copy. That is, if something is attached through a pointer, the value of the pointer is copied, not the attachment. Commons are never copied when the owner components are copied.

It is ok to expand a component more than once. Either it frees then re-expands, or it keeps what it can and checks to make sure it is correct.

The actual evaluation of constant linear elements is done here. For nonlinear elements, it computes a first guess.

The purpose is to make sure that the initial guesses, and the values standing in for prior iterations that do not yet exist, are properly set up. For linear elements, the values are set up here and are not computed later.

Basic storage elements, like capacitors, store the data from the previous step here. They may also attempt to predict a new value, in hopes of speeding up the real solution.
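The kind of prediction a storage element might make can be sketched as a linear extrapolation of the two previous accepted solutions. The function name and signature here are illustrative, not Gnucap's:

```cpp
#include <cassert>

// Minimal sketch of a tr_advance()-style predictor: linearly extrapolate
// the previous two accepted points to guess a value at the new time.
double predict(double t_new,
               double t0, double v0,   // most recent accepted point
               double t1, double v1)   // the one before it
{
  if (t0 == t1) {                      // no history yet: reuse last value
    return v0;
  }
  double slope = (v0 - v1) / (t0 - t1);
  return v0 + slope * (t_new - t0);
}
```

A good prediction does not change the final answer; it only gives the Newton iteration a better starting point, so fewer iterations are needed.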

For delay elements like logic devices and transmission lines, this function does the real work. It takes previous results and applies them, generating data that will be later loaded into the matrix.

Usually, it calculates the function and derivative. It may also do integration, interpolation, iteration, or whatever is required. The result is a set of values ready to stamp into the admittance matrix and current vector.

There are several distinct steps within this function.

- The first step is to gather the information necessary to make
the computations. Usually, this is the node voltages, but it could be
currents, temperature, charge, or something else.

- The next step is to evaluate any attached function. This could
be done in line, or by a call to
*tr_eval*. The result of this evaluation is stored in *_y0* (of type *FPOLY1*). The *tr_eval* function reads the value of *x* from *_y0*, fills in *f0* with the result of the function evaluation, and *f1* with its derivative. The *tr_eval* function must also check for convergence by comparing the new *_y0* with the old value, *_y1*. This attached function is generic in the sense that it is the same for all device types. This is the *y* = *f*(*x*) that is referred to in the behavioral modeling documentation.

- These values are stored for convergence checking and probing.

- After that, it must be converted to a current and admittance so
it can be used in the system of nodal equations. This step is
dependent on what type of device it is. For a conductance element,
*tr_eval* directly returns the correct information, so nothing needs to be done here. For a capacitor, this step does numerical integration. Capacitors store this in *_i0*. Most other elements do not store this result directly.

- Then, it must be converted into
*CPOLY* form to meet the requirements of the system of equations.

- The device is queued for loading. Unlike Spice, Gnucap does not actually load the matrix here.
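The middle steps above can be sketched for a simple nonlinear conductance i = g(v). The struct layouts, the stand-in function i = v^3, and the convergence test are all illustrative assumptions; only the shape of the computation follows the description:

```cpp
#include <cassert>
#include <cmath>

// Illustrative first-order forms (not Gnucap's actual classes).
struct FPOLY1 {double x, f0, f1;};
struct CPOLY1 {double c0, c1;};

// The attached y = f(x) function, here i = v^3 as a stand-in:
// fill in f0 with the value and f1 with the derivative.
FPOLY1 tr_eval_cube(double v)
{
  return FPOLY1{v, v*v*v, 3.0*v*v};
}

// Convergence check: compare the new evaluation with the old one.
bool converged(const FPOLY1& y0, const FPOLY1& y1, double tol)
{
  return std::fabs(y0.f0 - y1.f0) <= tol;
}

// Conversion to CPOLY form -- the Norton companion model:
//   i(v) ~= f1*v + (f0 - f1*x)
// c1 stamps into the admittance matrix, c0 into the current vector.
CPOLY1 to_companion(const FPOLY1& y)
{
  return CPOLY1{y.f0 - y.f1 * y.x, y.f1};
}
```

For a linear conductance, f1 is constant and c0 is zero, which is why no work is needed in the conversion step for that case.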

Actually, it does much more. In most cases, it actually loads a correction factor, assuming the old values are already loaded. To do this, it keeps track of what values are actually loaded. Whether it loads a correction or the actual value is determined first by the option

In addition, it may apply damping in hopes of improving convergence. This means to load a value somewhere between the new and old values, in effect taking a partial step. The decision to damp is semi-global. Groups of elements are adjusted together.

The actual loading is done by one or more of a small group of general functions, depending on whether the element is active, passive, poly, or a source. Only certain patterns can be stamped. Complex devices use a combination of these patterns.
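The "passive" pattern can be sketched as follows. A two-terminal admittance g between nodes n1 and n2 touches four matrix positions; a plain 2-D array stands in for Gnucap's sparse admittance matrix, and node 0 (ground) is not stored, hence the index checks. The function names are illustrative:

```cpp
#include <cassert>

const int N = 3;   // illustrative node count (excluding ground)

// The four-point stamp of a passive two-terminal admittance.
void stamp_passive(double A[N+1][N+1], int n1, int n2, double g)
{
  if (n1) {A[n1][n1] += g;}
  if (n2) {A[n2][n2] += g;}
  if (n1 && n2) {
    A[n1][n2] -= g;
    A[n2][n1] -= g;
  }
}

// Convenience for checking: stamp into a zeroed matrix, read one entry.
double stamped(int i, int j, int n1, int n2, double g)
{
  double A[N+1][N+1] = {};
  stamp_passive(A, n1, n2, g);
  return A[i][j];
}
```

Note the `+=`: stamping accumulates, which is what allows many devices (and incremental corrections) to share the same matrix positions, and why models must go through the load functions rather than writing the matrix directly.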

WARNING to model developers: DO NOT stamp the matrix directly!

The function TRANSIENT::review sets two variables: “approxtime” and “control”.

The variable “approxtime” is a suggestion for what the next time should be. Note that this is a time, not a difference. Also note that the simulator may override this suggestion. The other variable, “control”, is an enum that records how the time was selected. You can probe control(0) to find this code, or control(1) to see how many steps (not iterations) it calculated internally.

This time may be in the future, past, or again at the present time, depending on conditions. A time in the future means all is well, and the simulation can proceed as expected. A time in the past indicates something is wrong, such as convergence failure, excessive truncation error, or a missed event. In this case, the step is rejected, and time backed up. A repeat at the present time usually means a latency check failed. A portion of the circuit that was thought to be latent was found to be active. This usually indicates a model problem.

First, it attempts to suggest a time “rtime” based on iteration count and options.

There are several “options” that control the stepping:

- iterations > itl4 ... reduce by option "trstepshrink".

- iterations > itl3 ... suggest the same step as last time.

- else (iterations <= itl3) ... increase the step size, subject to the user step size / skip limit, to the larger of (rdt*trstepgrow), where “rdt” is the old “review” estimate, or (oldstep*trstepgrow), where “oldstep” is the step actually used last time. “trstepgrow” is an option, from the options command.

Then, the earliest time of the above two methods is selected. A time in the past means to reject the most recent time step and back up, but note that this time is only a suggestion that may not be used.
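The iteration-count rules above can be sketched as follows. The option names (itl3, itl4, trstepshrink, trstepgrow) are real Gnucap options, but the function, its exact arithmetic, and its use of step sizes rather than absolute times are illustrative assumptions:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative bundle of the stepping options named above.
struct Opts {
  int itl3, itl4;
  double trstepshrink, trstepgrow;
};

// Suggest a next step size from the iteration count of the last step.
double suggest_rtime(int iterations, double oldstep, double rdt,
                     const Opts& opt)
{
  if (iterations > opt.itl4) {          // convergence trouble: shrink
    return oldstep / opt.trstepshrink;
  }else if (iterations > opt.itl3) {    // marginal: hold the step
    return oldstep;
  }else{                                // converged easily: grow
    return std::max(rdt * opt.trstepgrow, oldstep * opt.trstepgrow);
  }
}
```

In the real code the result is a time, not a difference, and it is only one input to the final choice; the earliest of the competing suggestions wins.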

The function “TRANSIENT::sweep” essentially processes the loop “for (first(); notpastend; next())”. The function “TRANSIENT::next()” actually advances (hopefully) to the next step. It may go backwards.

The actual time step depends on the suggestion by the review function (approxtime), the event queue (which includes what Spice calls “breakpoints”), the user step size (nexttick), and some tricks to minimize changes.

Some considerations ...

- Changing the step size is an expensive operation, because it
usually forces a full LU decomposition and matrix reload. If the step
can be kept constant, changes are limited to the right-side,
eliminating the need for the full evaluation and LU.

- The simulator will place a time step exactly at any step for which the user has requested output, or Fourier analysis needs a point, or at any event from the event queue.

Assume we want a step exactly at the time the user requested. If the event queue calls for one sooner, take the event time; otherwise take the user time. Note that this time is needed exactly, either now or later. If “approxtime” is sooner than the exact time, inject a time step as follows: if the suggested step is less than half of the time remaining until the exact time is needed, take the approxtime; otherwise take half of the exact interval, in hopes that the next step will use up the other half.
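The selection rule just described can be sketched directly. All names here are illustrative; "exact" is the nearest time that must be hit exactly:

```cpp
#include <algorithm>
#include <cassert>

// Choose the next time from: the next event, the next user tick
// (both must be hit exactly), and the review suggestion "approx".
double next_time(double now, double event, double usertick, double approx)
{
  double exact = std::min(event, usertick);  // must land here eventually
  if (approx >= exact) {
    return exact;                 // suggestion is at or beyond it: take it
  }
  double interval = exact - now;
  if (approx - now < interval / 2.0) {
    return approx;                // a small step: take the suggestion
  }else{
    return now + interval / 2.0;  // else split the interval in half
  }
}
```

Halving the interval means the next step can consume the other half at the same size, avoiding the expensive step-size change described above.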

After that, there are some checks ....

“Very backward time step” means that the suggested new step is earlier than the PREVIOUS step, meaning that both the current step and its predecessor are rejected, thus it should back up two steps. Since it can back up only one step, it rejects the most recent step and tries again at the minimum step size. This usually means there is a bug in the software.

“Backwards time step” means to reject the most recent step, but the one before that is ok. It will reject the step and try again at the smaller interval. This happens fairly often, usually due to slow convergence.

“Zero time step” means that the new time is the same as the previous time, which usually means there is a bug in the software. Something is requesting a re-evaluation at the same time.

The combination of “zero time step” and “very backward time step” means that the re-evaluation didn't work.

Now, accept the new time and proceed.

For capacitors and inductors, the review function attempts to estimate truncation error using a divided difference method, and it suggests a time for the next solution that will result in meeting the error requirement. Occasionally, it will discover that the step just computed fails to meet the requirement, so it will reject it.

Truncation error is related to the third derivative of charge or flux. Since current is the first derivative of charge, it would seem that the second derivative of current should produce the same results faster. In practice, the current-based method tends to estimate high, leading to smaller steps, and the charge-based method tends to estimate low, leading to larger steps.

The conservative approach would suggest using the current-based method, but that sometimes led to unreasonably small steps and slow simulations, so I chose (as Spice did) the other method. Either method is ok when the step size used is close to being reasonable, but when the trial step is unreasonably large, either approach gives a very poor estimate.

Taking a step much too small will make the simulator run much slower, as it takes many steps, and then the step size is allowed to grow only slowly. This is slower both because of the many unnecessary steps and because of the many adjustments. Taking a step that is much too large will be rejected, but the rejected trial produces a better estimate than the first, so the retry usually succeeds. It is rare to get more than one rejection based on truncation error.
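The scaling behind the suggestion can be sketched generically. Local truncation error of a second-order integration method grows as the cube of the step, so a step meeting a tolerance scales as the cube root of the error ratio. The function, its names, and the growth cap are illustrative, not Gnucap's constants:

```cpp
#include <cassert>
#include <cmath>

// Given the step "dt" just taken, an error estimate "err" for it
// (e.g. from a third divided difference of charge), and a tolerance
// "tol", suggest a step that would just meet the tolerance.
// Error ~ K * dt^3, so the scale factor is cbrt(tol/err).
double suggest_step(double dt, double err, double tol)
{
  if (err <= 0.0) {
    return dt * 2.0;    // no measurable error: allow growth (illustrative cap)
  }
  return dt * std::cbrt(tol / err);
}
```

If the suggested step comes out smaller than the step just taken, the step failed to meet the requirement, which is the rejection case described above.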