 # State Space Algorithm

Recently, I implemented the state space algorithm in Racket. It's a simple algorithm that can find a goal state in n-dimensional Euclidean space. All one needs is a computable function, a starting domain, and a goal value.

## How does it work?

For example, let's say we want to find an input for cosine that gives us one of the zeros on the domain 0 to 2π. Of course, there's no need to use the algorithm in this case. We could look at a unit circle and find that applying cosine to the value 3π/2 gives us zero. Alternatively, the value π/2 works just as well. Shortly, we'll discover how the algorithm will nondeterministically return one of these two zeros; in this case, we don't favor either state. Any zero will do.
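To see the two candidates concretely, a quick check in the REPL confirms that both inputs send cosine to zero, up to floating-point error:

```racket
#lang racket

;; Both candidate inputs map to (approximately) zero under cosine.
(cos (/ pi 2))   ; on the order of 1e-17, i.e. zero up to rounding
(cos (* 3/2 pi)) ; likewise, on the order of 1e-16
```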

The main idea of the algorithm is to tighten the domain around a state that satisfies the goal. This process is done iteratively. On every crank, the code will generate K random guesses for a goal-satisfying state within the current domain. K can be any positive integer; you get to choose it. I recommend trying different values for K to see how that affects the outcome.
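The guessing step can be sketched on its own. Here's a minimal version; the helper name `gen-guesses` is mine, not part of the full program later in the post:

```racket
#lang racket

(require math/distributions)

;; Draw k uniform random guesses from the current domain [lo, hi].
(define (gen-guesses k lo hi)
  (sample (uniform-dist lo hi) k))

;; e.g. three guesses over the starting domain 0 to 2π:
(gen-guesses 3 0 (* 2 pi))
```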

The best state out of the set of K, i.e. the guess that gives an output closest to the goal, will be set aside. This guess is selected by finding the K range value that minimizes the formula below, where y is a guess's range value and yg is the goal value:

|y - yg|
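Selecting the best guess amounts to an argmin over that formula. A small sketch, with `best-guess` as a hypothetical helper name:

```racket
#lang racket

;; The best guess is the one whose output lies closest to the goal,
;; i.e. the x that minimizes |(fn x) - yg|.
(define (best-guess fn yg guesses)
  (argmin (lambda (x) (abs (- (fn x) yg))) guesses))

;; cos 1.5 ≈ 0.07 is the closest of the three outputs to 0:
(best-guess cos 0 (list 0.1 1.5 3.0)) ; → 1.5
```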

Obviously, in the beginning, the best state is likely far off from the goal state. The chance of generating a satisfying guess increases as the domain shrinks around a goal-satisfying state, which happens every iteration.

To get this new domain, there are a few steps to follow. First, let's find the average of the absolute values of the differences between the K range values and the goal value, which we'll call u:

u = (|y1 - yg| + |y2 - yg| + ... + |yK - yg|) / K

Next, we'll need a so-called epsilon value to scale our new domain. To compute it, use the following formula:

ε = |ybest - yg| / u

The numerator is the absolute value of the difference between the best K range value and the goal, which we computed earlier. The denominator is the u we just computed. So, you should only be plugging in numbers here.
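Put together, u and epsilon take only a few lines; the helper name and sample numbers below are mine. One nice consequence of the definition: the best error can never exceed the average error, so epsilon always lands between 0 and 1, which means the domain never grows.

```racket
#lang racket

;; u = mean of |y_i - yg| over the K range values
;; ε = |ybest - yg| / u
(define (epsilon fn yg guesses)
  (define errs (map (lambda (x) (abs (- (fn x) yg))) guesses))
  (define u (/ (apply + errs) (length errs)))
  (/ (apply min errs) u))

(epsilon cos 0 (list 0.1 1.5 3.0)) ; a value in (0, 1]
```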

Finally, insert our prerequisite work into this formula to generate the new domain, where x is the best state out of the set of K states, b is the end value of the current domain, and a is the beginning of the domain:

[x - ε(b - a)/2, x + ε(b - a)/2]
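The shrinking step then centers a scaled-down interval on the best state. A sketch, again with hypothetical names:

```racket
#lang racket

;; New domain centered on the best state x, with width e * (b - a).
(define (new-domain x e a b)
  (define half-width (* e (/ (- b a) 2)))
  (values (- x half-width) (+ x half-width)))

;; With e = 0.1, the width drops from 2π to 0.1 * 2π:
(new-domain 1.5 0.1 0 (* 2 pi))
```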

The algorithm will thus generate K range values and use the best one to create the new domain for the next iteration. We'll continue this process until one of the generated guesses is within an error bound that we choose.

## Conclusion

Astute readers may have noticed that this is a variation of a beam search. According to Wikipedia, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. In this case, our "graph" is the domain. Thus, we are working over a continuous space rather than a discrete one; however, since we can effectively reduce the search space, the algorithm works. Well, "works" is a fudgy term. I believe that the algorithm can sometimes fail to find a solution since it isn't mathematically guaranteed to converge the domain around a goal-satisfying state. Increasing the K parameter should help; more guesses increase the chance of one of them being good.

Here is the code for my complete solution. As a reminder, the algorithm is looking for a zero of the cosine function on the domain 0 to 2π. Note that K is set to 3 and the error bound is set to .01.

```racket
#lang racket

(require math/distributions)

(struct domain (x y))

;; Search for an input x such that (fn x) is within error-bound of goal.
(define (state-space fn goal [dom (domain 0 (* 2 pi))] [k 3] [error-bound .01])
  (let loop ([x 0] [lo (domain-x dom)] [hi (domain-y dom)])
    (cond
      [(within-bound? (fn x) goal error-bound) x]
      [else
       (define k-values (gen-k-values k lo hi))
       (define best-value (select-best-value k-values fn goal))
       (define-values (new-lo new-hi)
         (gen-new-bound k-values best-value fn goal lo hi))
       (displayln (format "~a < ~a < ~a" new-lo best-value new-hi))
       (loop best-value new-lo new-hi)])))

;; Draw k uniform random guesses from the current domain.
(define (gen-k-values k lo hi)
  (define dist (uniform-dist lo hi))
  (for/list ([ii k])
    (sample dist)))

;; The best guess minimizes |goal - (fn x)|.
(define (select-best-value k-values fn goal)
  (argmin (lambda (x) (abs (- goal (fn x)))) k-values))

;; Shrink the domain around the best state, scaled by
;; epsilon = best error / average error.
(define (gen-new-bound k-values best-value fn goal lo hi)
  (define errors (map (lambda (x) (abs (- goal (fn x)))) k-values))
  (define avg-error (average errors))
  (define e (/ (abs (- goal (fn best-value))) avg-error))
  (define term (* e (/ (- hi lo) 2)))
  (values (- best-value term) (+ best-value term)))

(define (within-bound? y goal error-bound)
  (<= (abs (- goal y)) error-bound))

(define (average xs)
  (/ (apply + xs) (length xs)))

(module+ main
  (state-space cos 0))
```

For a single run, I got a state value of about 1.566, which is close to π/2. Since π/2 is one of the valid zeros, the algorithm's outcome is what we expected. You should keep in mind that it's possible to get a state value near 3π/2, the other zero. Consider running the algorithm multiple times to get a feel for how things work.