Turbulence: Improving the Tokamak. Adding an additional layer of higher-frequency RF (mm-wave) heating, or better, at all points to create one smooth transition, or multi-collision eddies to increase neutron output. Drawings can be made if needed.

I have a field controlled by magnets: a ring within a ring, causing a spiral of plasma to spin either counterclockwise or clockwise, which can be changed per build specifics. But they use RF to control the disruption of the material. So we have Mag Field One, the inner core, spinning CCW or CW, and Mag Field Two spinning within the system around it to form a casing that spirals as it moves. The spiral causes turbulence.

If it were a food, it would be a doughnut.
The plasma would be an eternal atomic pastry. The islands that cool the system down to uselessness would be the crust of the foodstuff reacting to the outer limits of the system, unbeknownst to the user. So what do you do with stale dough? You shear it away from the wanted plane as fast as possible before it ruins the rest of the dough.

So what's to be done with our reactive doughnut? Let it continue on with its crust that forces a problem, or sharpen a series of knives against the target at every possible point of contention and have them run as a continuous subsystem through their RF control modulations, becoming the disruption of disruption's disruptor.

Using microwaves at 1 GHz ± 20 MHz means the wave must reach deeper into the system to hit all points so that no islands may form. That means a higher frequency, minute in width individually but encompassing the entirety of the system so no islands may form; this is explained down below.

If you use hydrogen to react like a common copper connection, i.e. freely moving electrons, you will get electron slag, where electrons have bounced off one another or burnt out completely, due to the age of the atom or the available space within the plasma not being ideal, causing some cooling effect. If the chain reaction lasts long enough, you get these islands.

Down below:

If you were to use mm RF waves (I think I'm heading in the right direction, but if those aren't the shorter wavelengths with the higher frequency, move the other way, please).
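As a quick check on that direction: for electromagnetic waves, higher frequency does mean shorter wavelength (λ = c/f). A minimal sketch, with purely illustrative frequencies rather than the bands any real machine uses:

```python
# Sanity check: higher frequency -> shorter (mm) wavelength.
# The frequencies below are illustrative, not tuned to any real tokamak.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimetres for a given frequency in Hz."""
    return C / freq_hz * 1000.0

for label, f in [("1 GHz", 1e9), ("28 GHz", 28e9), ("170 GHz", 170e9)]:
    print(f"{label:>8}: {wavelength_mm(f):8.2f} mm")
# 1 GHz   -> ~300 mm (not a mm wave)
# 170 GHz -> ~1.8 mm (a true mm wave, the kind suited to localized heating)
```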

You could hit all points of the plasma at once, or in alternating waves at an angle, to cause the internal spiral to lessen, become stagnant, or even reverse in eddies, so as to have sectional heat relative to what is wanted but in a much smaller package, since it has a greater volume of collision points within the toroidal rings over time, and the collection points can then be managed as needed, down to the very electron required as well. Then the neutrons spilling out wouldn't come from just one justified place, which is just a start, but from many spoked areas. It also means that if an island forms, it is likely to form away from the points of collision, or perhaps at the points of collision, and we would have more information to work with. If it forms at the points of no collision, the particles there are too far apart and moving too slowly, and that's where you pump your energy back into the system. If it forms at the points of collision, it's either a fusion or slag that has now been broken up into many smaller slivers that can be blasted through atomically, or by a higher RF as stated above, and broken down to the plasma heat level that everyone wants. Either way it might help with the design.

Attempt at drawing out a neural network that could fight COVID if all hospitals shared information at one synchronized point in time each day.

Basically that. You'll have to read the other articles below this one to get the gist of what I was thinking, and read this image from the bottom up, for whatever reason.

A basic neural net from last night to fight SARS-CoV-2.

I'm unsure whether the main nodes' extraneous markers need to be interconnected just for potential analysis. The darker blue on the right side shows the additional effects to be measured per Layer One spoke. One spoke runs from a location to the next date marker, so it must continue on its own path on the left through the system, connecting to a location in Layer Three.
Layer Two goes in both directions, as the disease's travel forms can change over time and on a whim.
Layer Three holds the splintering and conditional symptomatic-change effect nodes.
The last node (blue) feeds into the next data set.


Tackling the SARS-CoV-2 pandemic using Machine Learning (ML).



There are at least nine types of machine learning in current use, and within those domains there are many ways to go about this, but let's use a neural net.

Suppose each hospital is a node, with a given value at the end of the day measured against a single worldwide end point. No time zones, just one point in time on which all countries' doctors agree and at which everything converges. It can be automated at the end or start of their day to go out to the main computing branch so that all the information is included. The nodes are counted. If a node is missing or late, it takes a hit and shows either a lack of care, a need for shaming, or a need for help due to overwork, and is given a temporary worker to set them up, or whatever gets the whip cracked for most people.
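A minimal sketch of that daily roll-up, assuming made-up field names and a made-up cutoff (this is not a real reporting spec):

```python
# Collect one day's reports at a single universal cutoff and flag every node
# that is missing or late. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DailyReport:
    hospital_id: str
    cases: int
    deaths: int
    submitted_at: datetime  # when the report reached the main computing branch

def roll_up(reports: list[DailyReport], all_hospital_ids: set[str],
            cutoff: datetime) -> tuple[dict[str, DailyReport], set[str]]:
    """Keep on-time reports and flag every node that is missing or late."""
    on_time = {r.hospital_id: r for r in reports if r.submitted_at <= cutoff}
    flagged = all_hospital_ids - on_time.keys()  # these nodes take the "hit"
    return on_time, flagged

# Example: a single universal cutoff, no local time zones involved.
cutoff = datetime(2020, 4, 27, 0, 0, tzinfo=timezone.utc)
on_time, flagged = roll_up([], {"H-001", "H-002"}, cutoff)
print(flagged)  # both nodes flagged, since no reports came in
```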

That’s the outer layer of our nodes.

The second layer records whether the number of cases increased over the duration, decreased (deaths due to the disease notwithstanding), or remained the same, again deaths notwithstanding.

They are weighted, so that if cases are increasing along a trend of travel around the globe, we can match that trend against a third layer of features such as wind or water flow, boat travel, flight travel, or whatever else is still available to carry the disease.
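A rough sketch of that weighting, with invented thresholds and an invented travel factor (nothing here is calibrated):

```python
# Classify each node's day-over-day change, then scale a travel-layer feature
# (wind, boats, flights) by the fraction of nodes that are trending upward.
def classify_trend(yesterday: int, today: int) -> str:
    """Label the case-count change, deaths notwithstanding."""
    if today > yesterday:
        return "increasing"
    if today < yesterday:
        return "decreasing"
    return "stagnant"

def travel_weight(trends: list[str], base_weight: float = 1.0) -> float:
    """Crude weight for a travel feature: stronger when more nodes are rising."""
    if not trends:
        return 0.0
    rising = sum(1 for t in trends if t == "increasing")
    return base_weight * rising / len(trends)

trends = [classify_trend(40, 55), classify_trend(10, 10), classify_trend(30, 22)]
print(trends, travel_weight(trends))  # ['increasing', 'stagnant', 'decreasing'] 0.33...
```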

The next node level is defensive: we now know the shape of the beast and its direction, so we can heed the warning of an incoming influx of illness from those, any, or all regions with relative ease, and can preemptively prepare those about to be afflicted as it makes its rounds.

It will continue, splintering and reinforcing itself against us while we are unable to stop it completely. I saw the data from China's first report (no blame) through to the April 27, 2020 build, and the RNA is a different amount entirely: 19 strands of the same elongation before being whole, down from the initial 20+; it is becoming more efficient.

For that build I did my best to find the end codons that would terminate the elongation naturally, and from there found a dual MM that matched the end codons to 19 elongations. Perhaps a coincidence, though I know that only one M is needed to start a build; two is not unheard of.

Anyway back to the Machine Learning:

Each iteration of splintering would sit on the next level of nodes, weighted by intensity of death, reproduction of the disease after recovery, location, and time. Each splinter would also have an additional sub-node system attached to it that lists the change in symptoms, so that you could map and join those together on the next layer.

This layer is simply the next day.
The cycle repeats.

There will at first be dead spots, but with proper technology we can find any and all iterations and continuations of the disease for any location, without the public going into an uproar about their rights, though they give those up anyway to have a computer do the work for them.

It would be a semi-supervised system at first; the data would be hand-fed by some nurse or HR person who can feel like they're doing something extra if they want, or just part of the job if they don't. They should be lauded, though, for going the extra mile.

There are backdoors into every system on the planet; why the hell wouldn't you use them to defuse the situation collectively? You already have every citizen tracked. They're only just realizing it now.


So now let’s look at a practical build: 

Layer One: 164,500 (as of 2015) hospital nodes.
Sub-Layer One Spokes: weighted for current location (mobile or not), patients seeable per day, incidences increasing, decreasing, or stagnating, deaths rising, decreasing, or stagnating, deaths in totality, report time relative to the universal time marker, care lacking, and workforce drained, overrun, or underwhelmed; then take the total roster of workers and shuffle them to new locations.

As a precaution, create a universal-language booklet on how to manage current systems. Already done, I believe, or it should have been, so workers can be interchangeable within a day or so's flight.

Sub-Layer One Spokes: they are weighted so that if cases are increasing along a trend of physical travel around the globe, we can match them against a second-layer spoke such as wind or water flow, boat travel, flight travel, food-borne illness, food processing, or whatever else is still available to carry the disease.
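To make those Layer One spokes concrete, one possible shape for the per-hospital record follows; the field names are my own guesses at the weights listed above, not a fixed schema:

```python
# One hospital node's Sub-Layer One spokes as a plain record. Every field
# mirrors a weight mentioned above; names and types are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LayerOneSpokes:
    hospital_id: str
    location: str
    mobile: bool                      # mobile unit or fixed site
    patients_seeable_per_day: int
    incidence_trend: str              # "increasing" / "decreasing" / "stagnant"
    death_trend: str
    total_deaths: int
    report_delay_hours: float         # lateness relative to the universal marker
    care_lacking: bool
    workforce_status: str             # "drained" / "overrun" / "underwhelmed"
    travel_factors: dict[str, float] = field(default_factory=dict)
    # e.g. {"wind": 0.2, "boat": 0.1, "flight": 0.7, "food": 0.0}

node = LayerOneSpokes("H-001", "example-city", False, 120, "increasing",
                      "stagnant", 14, 2.5, False, "overrun")
```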

Layer Two: Defensive. We now know the shape of the beast and its direction, so we can heed the warning of an incoming influx of illness from those, any, or all regions (from doctors, not politicians, as the latter have proven themselves inept, though not all) with relative ease, and can preemptively prepare those about to be afflicted as it makes its rounds.

For that April 27, 2020 build I did my best to find the end codons that would terminate the elongation naturally, and from there found a dual MM that matched the end codons to 19 elongations. Perhaps a coincidence, as I know that only one M is needed to start a build; two is not unheard of. Included below:
Probably nothing.

sars-covid-02-basic-genetic-information-i-found-today.


Anyway back to the Machine Learning:

Layer Three: Splintering. Each iteration of splintering would sit on the next level of nodes, weighted by intensity of death, reproduction of the disease after recovery, location, and time. Each splinter would also have an additional sub-node system attached to it that lists the change in symptoms, so that you could map and join those together on the next layer, and with previous layers of location, to map their movements and trends as well.
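A sketch of how one splinter node and its symptom sub-nodes might link back to the location layer; the structure and names are assumptions, not part of any existing system:

```python
# Each splinter (variant) gets its own node, weighted as described above, with
# a sub-node list of symptom changes and links back to location nodes so that
# movements and trends can be joined across layers. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class SymptomChange:
    symptom: str
    direction: str                  # "new", "worsening", or "fading"

@dataclass
class SplinterNode:
    splinter_id: str
    death_intensity: float          # weight: severity of associated deaths
    post_recovery_reproduction: float
    location_ids: list[str]         # links back to Layer One location nodes
    first_seen: str                 # date marker, e.g. "2020-04-27"
    symptom_changes: list[SymptomChange] = field(default_factory=list)

variant = SplinterNode("S-19", 0.8, 0.1, ["H-001", "H-042"], "2020-04-27",
                       [SymptomChange("loss of smell", "new")])
```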

This layer is simply the signal to repeat.
The cycle repeats.

Artificial Atoms: As they relate to Qubits.

So far they're using single-electron artificial atoms in quantum wells to become qubits.

From what I've found in my lazy search, they seem to think that qutrits and ququarts (four-level systems) are the largest artificial multi-level units that have been made since 2013, but that is incomplete.

The standard picture of an atom is a nucleus wrapped with electrons that travel around it in wave-formed patterns.


Thought experiment to get to the next point:

Suppose you let a single atom float in a vacuum full of superfluid (assuming no bonds are possible), denying gravity's hold on the density and mass of the atom, with the superfluid pushing against all parts of the atom at once depending on the superfluid's movement, unless we're lucky enough to have a stabilized superfluid after some time (inherent vibration notwithstanding). Would we be able to find the sole atom's exact electron travel within those confines? If so, could we then release that atom in this fluid so that it is constantly "falling" or "rising" depending on the superfluid's movement around it? That depends on the container's shape: a single tube gives up; an S-bend at its end-middle gives down, if approached from the proper side.

Then we could "open" an area where the electrons are centralized within a halo around the "top" or "bottom" of the sole atom. That would open up many stages to insert wavelengths other than radio at once, from within the container's walls or outside them if properly managed, and change the planes being interacted with so they can be acted on more than singly at once. You could then actually hit one electron, infer its superimposed cousin from the glimmer following the initial hit (not the same as the other electrons), and then infer the general direction of the superposed atom outside the container. This may have to be done in a completely dark room, with a dual-vacuumed enclosure and dual superfluid to allow clarity. And if so, would it be possible to set this example up twice, in either the same state or two opposing states, so that you get that glimmer and can start to literally determine superposition distances and locations?

At the same time we could do a different function where we hit multiple electrons at once, causing them to pulse in ways we want: up, down, side to side, diagonal. Since the electrons of the sole atoms are compressed between the superfluid's electrons, if we time them, we could bounce from one atom to the other and back again, bending the superposition (though technically not that; a new form of some kind, an atomic J-hook?) around the same atom, onto either the other side of the same electron or a cousin electron within the halo. From there we take all possible iterations of those atoms and wavelengths, and we can build a table/dataset of superposition distances, or at the least their angles. Knowing that raising the superfluid's temperature using light could change its state, we would have to start small with the lowest, coolest light possible, or use diffusion through a material to slow the light to its coolest speed, passing it through a final lens to hit its target. Read speed isn't important at first; what matters is that we can figure out the change of states in real time.
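A minimal sketch of that table of iterations, just to show the bookkeeping: every combination of atom, probe wavelength, and pulse direction gets a recorded angle. The measurement here is a random placeholder; in the thought experiment it would come from the observed glimmer.

```python
# Enumerate every (atom, wavelength, direction) combination and record one
# observed angle per row. All names and values are illustrative placeholders.
import itertools
import random

atoms = ["atom_A", "atom_B"]                # hypothetical trapped atoms
wavelengths_nm = [1550.0, 1310.0, 780.0]    # illustrative probe wavelengths
directions = ["up", "down", "left", "right", "diagonal"]

table = []
for atom, wl, direction in itertools.product(atoms, wavelengths_nm, directions):
    measured_angle_deg = random.uniform(0.0, 360.0)  # stand-in for the glimmer reading
    table.append({"atom": atom, "wavelength_nm": wl,
                  "direction": direction, "angle_deg": measured_angle_deg})

print(len(table), "iterations recorded")  # 2 * 3 * 5 = 30 rows
```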

Hot Qubits:


There is such a thing as multi-layered superpositioning when using light's wavelengths as the medium of superpositioning.

You can cut the points of contact around the nucleus so that they create multiple cast shadows where they are cross-referenced. You can still read the bits themselves, but also the references as other operations, or at least as another set of information, be it topographical or depth-wise.

Eventually you'll be getting "hot" chipsets that can handle the cooler frequencies in tandem when pulsed with slightly higher frequencies, and when they mix you'll get another set. Interspersing them between the atoms means you'll get an array of data from one series of quantum pulses, but they can be interconnected in such a way as to be read from any side as warranted, as long as you want to read those cross-references. You can then focus those wavelengths into something else using prisms or whatever is new nowadays in lens technologies.

You can also build materials to absorb the cross references and hold them as heat.

There is a possibility to create combination boards of cool and hot wells that let you do both functionalities at once in one pass. As they become warmer and the cool side is raised, you flip the tech and cool the hot qubits back down, and then you have an on/off action as well as the cool/warm cycle needed to disperse heat across massive racks of quantum chipsets. This is much as they do now with quantum dots, though I know those range in size instead; these would range in temperature or frequency reception until they hit a critical array, and then an interpreter function would change it to the opposite, or reproduce the cooling function in the same spot in waves, so you don't lose information and can "store" it as you would: quantum DRAM. But light is the way to go here.

To create quantum SRAM you would need to spin the materials down to a certain cool point, then keep them at a temperature considered stable, and that would keep the spin static if not adjusted, creating a range of SRAM. Perhaps you would need a certain atom type that spins slowly against a certain light frequency constantly and wobbles very little to not at all.

You know what, if you captured the atom with a carbon shell as I described this morning, then pulsed it within the solution so that it spun slowly while at its coolest, kept it in a vacuum so that it couldn't run up the side of the wall, and only allowed entry of the light through the carbide rings insulating the tetrahedral diamonds via focusing lenses, you could get very "slow" read speeds, meaning the atoms would have time to spin down to "static"; then you could repeat, and that would also be your storage read speed.