**FAST FERMAT INTEGER FAMILY FACTORING**

**Part Five: Basics of Integer Factoring**

*Sometimes we are not what we want to be,*

*Many times we are not what we should be, and*

*Most times we are not what we could be.*

**Foreword**

I finally have a decent algorithm for speeding up **Fermat Integer Factoring**, but I cannot tell at this time whether it is a game changer or merely an incremental improvement. Perhaps if I had a fast computer with lots of array storage, I might be able to determine that; alas, that is not likely anytime in the foreseeable future. I do have an idea for fine-tuning the algorithm, and perhaps that might shed a little light on its potential for improvement. If so, I will post the results. Until then, please enjoy my little offering.

Special thanks to D Howe for showing interest and support in my madness.

This article is part of the series called the Basics of Integer Factoring. A list of all the other articles can be found on my About page.

**Chapter 1 — Background**

The basics of **Integer Factoring Families** are discussed in earlier articles in this series. At this time, I will not be discussing the mathematical principles involved, as they are fully presented in those articles along with JavaScript source code for examination.

**Chapter 2 — The Algorithm**

Knowing how the math works in this particular instance does not, by itself, tell you how to implement the algorithm. Think of a train on a railroad network: we have to balance the speed we can travel against the cargo (array size) we can carry. I went through a dozen different pathways before I found something with a reasonable balance of speed and array storage, and it will not surprise me to learn that there is a better way.

Our test subject for this essay will be *N = 4,742,594,831,699 = 9901 × 479,001,599*. Since we are ultimately using a Fermat solution, our final verification is always based on *N = An² − Bn²*. However, instead of stopping at every depot, we are attempting to create an express train. If we run a pure **Fermat Factoring** routine, we stop at every depot: *237,328,001* stops. **Enhanced Fermat** tells us that we can skip *6* or more trials between stops because of the way **Integer Families** are arranged, so we only stop *19,777,336* times. That saves *217,550,665* stops. We are faster because we no longer lose time at each stop, but we introduce an overhead factor and a pathway factor. The overhead is our cargo: before a train leaves on its trip it must be loaded, which here means running the calculations that determine how many stops we can skip. We also sadly discover that we must make the trip twice! Imagine that two routes run parallel to each other but are connected by crossover routes. Pure Fermat simply runs along Path A, crosses over to Path B, and then back again in a zigzag pattern. To keep **Enhanced Fermat** in express mode, we must take each path separately.
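For readers who want to see the depot-by-depot version, here is a minimal sketch of pure **Fermat Factoring** in JavaScript (my own illustration, not the essay's source code). It uses `BigInt` so that values the size of the test subject are representable, though at *237,328,001* stops that particular trip would take a long while:

```javascript
// Integer square root by Newton's method (BigInt-safe).
function isqrt(n) {
  if (n < 2n) return n;
  let x = n, y = (x + 1n) / 2n;
  while (y < x) { x = y; y = (x + n / x) / 2n; }
  return x;
}

// Pure Fermat: try every An from ceil(sqrt(N)) upward until
// An^2 - N is a perfect square Bn^2, giving N = (An - Bn)(An + Bn).
function fermatFactor(N) {
  let An = isqrt(N);
  if (An * An < N) An += 1n;     // start at ceil(sqrt(N))
  for (;;) {
    const diff = An * An - N;    // candidate Bn^2
    const Bn = isqrt(diff);
    if (Bn * Bn === diff) return [An - Bn, An + Bn];
    An += 1n;                    // one stop per depot
  }
}
```

For a small example, `fermatFactor(5959n)` finds the perfect square at *An = 80* (since *80² − 5959 = 441 = 21²*) and returns the factors *59* and *101*.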

When we consider **Fast Fermat**, we greatly increase the overhead. All the calculations of **Enhanced Fermat** must be repeated, plus we must juggle the **Exclusion Factors**. (More on that in the next chapter.) Using the **Exclusion Factors**, our route is now irregular. Sometimes we travel at speeds near those of **Enhanced Fermat**, and sometimes at speeds ten or more times faster. In addition, we are still required to take two routes, and we must unpack the **Exclusion Factors** each time we stop. Thankfully (after a dozen tries), some progress was made and we only stop *1,590,810* times now. That saves another *18,186,526* stops.

**Chapter 3 — The Exclusion Factors**

The **Exclusion Factors** originate from the properties of **Integer Families** and tell us which values of *(An)* can be skipped. They work by comparing residues (moduli) from known drop patterns against normalized drops derived from the value of *(N)*; if they do not match, the value is excluded.

The **Exclusion Keys** can be any integer; however, some work better than others. Using the values *2, 3,* or multiples of these may not produce any exclusions at all, e.g. the *“X”* entries in the chart.

Other than the value *2*, the prime numbers generate sufficient exclusions. Keys can also be combined by multiplying two keys together, because the modulus of the product generally has properties similar to the two original keys combined.
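The essay does not spell out the drop-pattern comparison in detail, so the following is only my reading of it: since *N = An² − Bn²* forces *An² − N* to be a perfect square, a residue *r* of *An* modulo a key *k* can survive only when *(r² − N) mod k* is itself a square modulo *k*. A sketch under that assumption (`buildExclusionKey` is my own hypothetical helper, not the essay's code):

```javascript
// For key k, keep only residues r of An where (r*r - N) mod k is a
// square mod k; all other residues become the "X" (excluded) entries.
function buildExclusionKey(N, k) {
  const squares = new Set();
  for (let b = 0n; b < k; b++) squares.add((b * b) % k);
  const keep = [];
  for (let r = 0n; r < k; r++) {
    const t = (((r * r - N) % k) + k) % k;   // normalize to 0..k-1
    if (squares.has(t)) keep.push(r);
  }
  return keep;   // admissible residues of An modulo k
}
```

For *N = 5959* this reproduces the behavior described above: key *2* excludes nothing (every residue mod 2 is a square), while key *3* excludes the residue *0*, keeping *1* and *2* — and indeed the true *An = 80 ≡ 2 (mod 3)* survives.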

The values that I work with, multiplied by the *31* **Binary Bits**, produce the following table:

```javascript
TheFFKeyLength=33;     //-- 3×11           with 31 Binary Bits=1023
TheFFKeyLength=105;    //-- 3×5×7          with 31 Binary Bits=3255
TheFFKeyLength=1155;   //-- 3×5×7×11       with 31 Binary Bits=35805
TheFFKeyLength=15015;  //-- 3×5×7×11×13    with 31 Binary Bits=465465
TheFFKeyLength=255255; //-- 3×5×7×11×13×17 with 31 Binary Bits=7912905
```

The **Binary Bits** take advantage of a quirk of JavaScript: it allows construction of 32-bit binary numbers. Creating an **Exclusion Key** with *31* binary bits increases the size of the **Key Loop** without dramatically increasing the array size. (Do this as a last resort. If you have array size to spare, use it!)
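One plausible way to realize that quirk (my own layout; the essay's arrays may be organized differently): JavaScript's bitwise operators work on 32-bit integers, so a single slot of a `Uint32Array` can carry *31* keep/exclude flags, stretching the key's coverage by *31×* without a *31×* larger array:

```javascript
// 31 flags per 32-bit slot: flag i lives at bit (i % 31) of slot (i / 31).
function makeBitKey(length) {
  return new Uint32Array(Math.ceil(length / 31));
}
function setKeep(key, i) { key[(i / 31) | 0] |= 1 << (i % 31); }
function isKept(key, i)  { return ((key[(i / 31) | 0] >>> (i % 31)) & 1) === 1; }
```

With this packing, the *1023*-entry key from the table above fits in just *33* array slots.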

The size of the **Exclusion Key** determines how fast the routine will run, but it also increases the overhead time. When using a browser to run the JavaScript program, there may be a limit to the useful key size, because the browser will detect a “runaway” program and take steps to shut it down. Note, on the positive side, that the increments required for factoring are indeed smaller, which indicates the math is working properly. This problem can be programmed around if a person (other than me) has the inclination to do so, but there will still be significant overhead to deal with in any case.

The smaller keys can be built in a few seconds; the larger keys may take ten or more minutes. I have not experimented with the idea, but perhaps two or three loops could be built and run simultaneously. For the time being, the key must be adjusted to match the size of *(N)* under investigation. Values of *(N)* less than *1000* can probably be solved quickly using the key *33×31*. Increasing the size of *(N)* by *1000* times might require a key of *105×31*. This continues until, ultimately, the limit of available array size slows the process back to horses and wagons. I am curious how far it will go, though.

**Chapter 4 — The Key Loop**

After building the **Key Array Loop**, we compress it and remove all the *“X”* values. The **Key Loop** now contains only offset values for *(An)*. The size of the **Key Loop** becomes the original **Base**. After each **Key Loop** is complete, the **Base** is incremented by the size of the **Key Loop**. Thus each *An = Base × Loops Complete + Index Offset*. A nice aspect of this is that multiple computers can tackle the problem, each assigned a discrete base range at the start. With that in mind, the actual speed of the factoring routine might be irrelevant.
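The walk above can be sketched as follows (my own rendition, with `offsets` standing in for the compressed **Key Loop**): candidates come out as *base × loops + offset*, and the excluded *“X”* residues never appear at all:

```javascript
// Walk candidates An = base * loops + offset, skipping the "X" residues.
// offsets: surviving residues (the compressed Key Loop); base: key length.
// All arguments are BigInt; BigInt division truncates, so the first pass
// begins at the loop containing `start`.
function* keyLoopCandidates(offsets, base, start) {
  for (let loops = start / base; ; loops++) {
    for (const off of offsets) {
      const An = base * loops + off;
      if (An >= start) yield An;
    }
  }
}
```

For *N = 5959* with key *3* the surviving offsets are *1* and *2*, so starting from *ceil(√N) = 78* the generator emits *79, 80, 82, 83, …* — the true *An = 80* arrives on the second step, and every multiple of *3* is skipped.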

**Chapter 5 — Final Notes**

I have put all this together to make a point: I believe our understanding of integer factoring is primitive. There are mathematical relationships that we have completely overlooked, and I hope this series of essays is helpful to someone.

**Appendix 1 — Standard Definitions and Equations**

- *^* — Raise to the Power of
- *√[…]* — Square Root Function
- *An* — Factor of Base Root Value: *An ≥ √[N]*
- *N’* — Integer Prime Number: *N’ = N’×1 ≠ N*
- *N* — Composite Integer Factorization: *N = Xn×Yn ≠ N’*
- *N* — Fermat’s Factorization: *N = An² − Bn² = (An − Bn)(An + Bn)*
- *N* — Series Factorization: *N = Si + 6Sn = (Sa + 6Sx)(Sb + 6Sy)*
- *Si* — Series Identifier for Composite *N*: *Si = 5* or *7*
- *Sa* — Series Identifier for *Xn*: *Sa = 5* or *7*
- *Sb* — Series Identifier for *Yn*: *Sb = 5* or *7*
- *Xn* — Smaller Integer Factor of *N*: *Xn = An − Bn* where *1 < Xn ≤ Yn*
- *Yn* — Larger Integer Factor of *N*: *Yn = An + Bn* where *Yn ≥ Xn > 1*
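As a quick sanity check, these identities can be verified against the essay's test subject; the *An* and *Bn* below are derived from the known factors rather than quoted from the essay:

```javascript
// Derive An and Bn from the known factors, then confirm N = An² − Bn².
const N  = 4742594831699n;
const Xn = 9901n, Yn = 479001599n;
const An = (Xn + Yn) / 2n;   // 239505750n
const Bn = (Yn - Xn) / 2n;   // 239495849n
console.log(An * An - Bn * Bn === N);       // true
console.log((An - Bn) * (An + Bn) === N);   // true
```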