[{"content":"Imagining a little demo In a previous post I wrote about Homography and how manipulating images to create an arbitrary projection onto a screen (or camera sensor) from a 3-dimensional space works. That was a nice little piece of math if you ask me, but I wanted to show a demonstration that\u0026rsquo;s a bit more exciting and figured video would be even cooler than a single image.\nThat led me to imagining a video feed that\u0026rsquo;s distorted in some way, and if a single feed is nice, why not have a few of them in some cyberpunky type old, but at the same time futuristic, kind of way. Like a derelict video store from a 1980s Bladerunner existence. Yes, I\u0026rsquo;m on a few days off work and my mind wanders.\nIn any case\u0026hellip; Thinking about how to set up something like that, I immediately started to wonder how computationally intensive it might be. Will Python suffice? Will I have to learn how to write a GPU shader? I want to do that, but if the task is going to be hard and involved, I won\u0026rsquo;t have time to get into it right now. Do I use Rust again?\nCSS to the rescue Well, it turns out you don\u0026rsquo;t have to do anything hard at all. All the hard work has already been done for us, in CSS of all places. CSS is built to manipulate the appearance of elements on web pages. It does that incredibly well and is optimized in all imaginable ways, because of course it is. If you look at the docs, there\u0026rsquo;s a transform function called matrix3d() which defines a 3D transformation as a 4x4 homogeneous matrix. If you need a refresher on what that is, you can read the previous post. But it\u0026rsquo;s exactly what we need to take an element and project it Homographically! The browser then does this in the most optimal way imaginable, using the GPU for rendering. To me, this is mind blowing. No Rust, no low level, no thinking about how to make something work on the GPU. 
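A minimal sketch of what that can look like, assuming a hypothetical element with class .screen wrapping the video feed (the matrix values here are made up purely for illustration):

```css
/* matrix3d() takes 16 values in COLUMN-major order: the first four
   values form the first column of the 4x4 homogeneous matrix.
   The fourth entry of each of the first two columns feeds the bottom
   (perspective) row, which is what produces the keystone distortion
   of a homography. */
.screen {
  transform-origin: 0 0;
  transform: matrix3d(
    0.9,  0.05, 0, 0.0004,  /* column 1 */
   -0.15, 0.85, 0, 0.0002,  /* column 2 */
    0,    0,    1, 0,       /* column 3 */
    12,   20,   0, 1        /* column 4: translation */
  );
}
```

Animating those 16 numbers from a mousemove or devicemotion handler is essentially all the parallax effect below needs.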
It even works on your cellphone and as a developer I get this for free! I don\u0026rsquo;t know how exciting this is to you, but to me it made it click that modern browsers are an amazing technology.\nThe demo You can check out the demo below. It features three image layers that are offset from each other. A window pane, a table with some screens and a shop background that\u0026rsquo;s a bit hard to see. If you give the browser permission to use your camera then you see the video feed transformed live. The three layers shift slightly relative to each other as you move the cursor to get a parallax effect, and if you are running this on your phone, it reads the accelerometer to get the parallax going. Again, the way to implement this effect is CSS with just the tiniest bit of JavaScript to pull data from the camera and sensor. Browsers FTW.\n","permalink":"https://blog.winer.co.il/posts/cyberpunk-demo/","summary":"\u003ch2 id=\"imagining-a-little-demo\"\u003eImagining a little demo\u003c/h2\u003e\n\u003cp\u003eIn a \u003ca href=\"/posts/homography-notebook/\"\u003eprevious post\u003c/a\u003e I wrote about Homography and how manipulating images to create an arbitrary projection onto a screen (or camera sensor) from a 3-dimensional space works. That was a nice little piece of math if you ask me, but I wanted to show a demonstration that\u0026rsquo;s a bit more exciting and figured video would be even cooler than a single image.\u003c/p\u003e\n\u003cp\u003eThat led me to imagining a video feed that\u0026rsquo;s distorted in some way, and if a single feed is nice, why not have a few of them in some cyberpunky type old, but at the same time futuristic, kind of way. Like a derelict video store from a 1980s Bladerunner existence. 
Yes, I\u0026rsquo;m on a few days off work and my mind wanders.\u003c/p\u003e","title":"Short take on the awesome power of web-browsers"},{"content":" ","permalink":"https://blog.winer.co.il/posts/homography-notebook/","summary":"\u003ciframe\n  src=\"/homography/index.html\"\n  width=\"100%\"\n  height=\"800\"\n  style=\"border:none;\"\n\u003e\u003c/iframe\u003e","title":"What is Homography?"},{"content":"Making a photorealistic image in code Programming a computer to make a photorealistic image is a very cool party trick. Thinking about the problem from first principles, it\u0026rsquo;s not clear what you need to get this thing working. I mean, there\u0026rsquo;s a clue in the name - it\u0026rsquo;s tracing rays. That gives us a clear indication that we will be following light rays along in a 3d scene, sure. But what was not clear to me is how much is needed, how much detail, when modelling the light-matter interaction to get a result that looks good. It turns out that the answer is: surprisingly little. Here\u0026rsquo;s what I made by following an amazing free three-book series by Peter Shirley, Trevor David Black and Steve Hollasch: Ray Tracing in One Weekend.\nNot only is ray tracing a fun project because the end results are beautiful even at minimal effort; it\u0026rsquo;s also a great project because it lends itself perfectly to the reward loop where you write-compile-view-iterate. It\u0026rsquo;s perfect for nice little serotonin boosts, and because the render time can be non-negligible you even get a bit of down time between renders if you want it.\nThe Ray tracing in one weekend series Ray tracing in one weekend (RTOW from here on) does an amazing job of showing you exactly how to implement a ray tracer in C++. You don\u0026rsquo;t even need to know C++ to any serious level if you want to follow it. The philosophy throughout is to give you the bare minimum needed, but not less than that. 
It\u0026rsquo;s very hard to pull something like that off and the series is amazing for doing it perfectly. And it\u0026rsquo;s free!\nAt work, I\u0026rsquo;ve been using ray tracers in various forms for a while now. There are a bunch of famous serious implementations like pbrt or Pixar\u0026rsquo;s RenderMan and when working with Blender (which I\u0026rsquo;ve loved since my PhD days) there\u0026rsquo;s Cycles, but I never put any serious thought into how this thing works. Having gone through a basic example, I have a much better understanding of some of the knobs I mindlessly twisted in Blender to change the visual quality of the outcome.\nIn RTOW you build the ray-tracer math from the ground up. In the book you implement the entire stack needed to describe vector math and some linear algebra operations (because of course it\u0026rsquo;s linear algebra. EVERYTHING is.). I opted to implement things in Rust, both because it made it a bit more of a challenge (compared to copying the code snippets) and because I wanted to see if I could port things over to WASM once again. I did.\nHow does basic ray tracing work? The fundamental problem of 3D graphics is projecting a three-dimensional world onto a two-dimensional grid of pixels. In \u0026ldquo;rasterization\u0026rdquo; (the technique used by almost all real-time video games), we take triangles and project them onto the screen. It’s incredibly fast, but it struggles with things that light does naturally: soft shadows, reflections, and complex refraction.\nRay tracing flips the script. Instead of projecting triangles to the screen, we shoot \u0026ldquo;rays\u0026rdquo; from the eye (or camera) through each pixel and into the scene. We ask: \u0026ldquo;What does this ray hit?\u0026rdquo;\nThe Geometry of a Ray A ray is essentially a mathematical function of a 1D parameter $t$. 
If we have an origin point $\vec{A}$ and a direction vector $\vec{b}$, any point $\vec{P}$ along that ray can be described as:\n$$ \vec{P}(t) = \vec{A} + t\vec{b} $$As we vary $t$, the point $\vec{P}(t)$ moves along the line. If $t \u0026gt; 0$, the point is in front of the camera; if $t \u0026lt; 0$, it\u0026rsquo;s behind us. Our task is to find the smallest positive $t$ where this ray intersects an object in our scene.\nAnything you want to render, as long as it\u0026rsquo;s a sphere To solve for $t$ we need a concrete geometry. We start with only spheres. To render a sphere, we simply need to find the intersection of this ray and the equation for a sphere. If a sphere has center $\vec{C}$ and radius $r$, then any point $\vec{P}$ on the surface of the sphere satisfies the following equation:\n$$ (\vec{P} - \vec{C}) \cdot (\vec{P} - \vec{C}) = r^2 $$This says that the square of the distance from the center to any point on the surface is equal to the radius squared. By substituting our ray equation $\vec{P}(t) = \vec{A} + t\vec{b}$ into the sphere equation, we get a quadratic equation in terms of $t$:\n$$ (\vec{A} + t\vec{b} - \vec{C}) \cdot (\vec{A} + t\vec{b} - \vec{C}) = r^2 $$Expanding this out, we get:\n$$ t^2(\vec{b} \cdot \vec{b}) + 2t(\vec{b} \cdot (\vec{A} - \vec{C})) + (\vec{A} - \vec{C}) \cdot (\vec{A} - \vec{C}) - r^2 = 0 $$So we need to solve a quadratic. If the discriminant ($b^2 - 4ac$) is positive, the ray hits the sphere in two places (entering and exiting). If it\u0026rsquo;s zero, it grazes the edge. If it\u0026rsquo;s negative, we missed entirely. We pick the smallest positive $t$, calculate the surface normal at that point (the vector pointing straight out from the center), and use it to determine the color.\nScattering and reflections What makes ray tracing \u0026ldquo;photorealistic\u0026rdquo; is how it handles materials. 
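Before getting to materials, here is a minimal Rust sketch of that ray-sphere quadratic. The names and structure are mine, simplified from the book's approach rather than copied from it:

```rust
// Minimal ray-sphere intersection: solve the quadratic in t.
#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
}

/// Smallest positive t at which the ray A + t*b hits the sphere
/// (center c, radius r), or None if the ray misses entirely.
fn hit_sphere(a: Vec3, b: Vec3, c: Vec3, r: f64) -> Option<f64> {
    let oc = a.sub(c);                 // A - C
    let qa = b.dot(b);                 // t^2 coefficient
    let qb = 2.0 * b.dot(oc);          // t coefficient
    let qc = oc.dot(oc) - r * r;       // constant term
    let disc = qb * qb - 4.0 * qa * qc;
    if disc < 0.0 {
        return None;                   // negative discriminant: missed
    }
    let t = (-qb - disc.sqrt()) / (2.0 * qa); // nearer of the two roots
    if t > 0.0 { Some(t) } else { None }
}

fn main() {
    // Ray from the origin, straight at a sphere of radius 0.5 one unit away.
    let t = hit_sphere(
        Vec3 { x: 0.0, y: 0.0, z: 0.0 },
        Vec3 { x: 0.0, y: 0.0, z: -1.0 },
        Vec3 { x: 0.0, y: 0.0, z: -1.0 },
        0.5,
    );
    assert_eq!(t, Some(0.5)); // enters the sphere at t = 0.5
}
```

The real hit routine also returns the hit point and normal, but the quadratic above is the entire geometric core.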
When a ray hits a surface, it doesn\u0026rsquo;t just stop and return a color. It asks the surface: \u0026ldquo;How do you reflect light?\u0026rdquo;\nThis leads to a recursive process. The color of a pixel is not a static value; it is the sum of the light gathered by a ray as it bounces around the scene.\n1. Diffuse (Lambertian) Surfaces Matte surfaces, like a brick or a piece of paper, scatter light in random directions. In our code, when a ray hits a diffuse surface, we generate a new ray starting at the hit point and pointing in a random direction within a hemisphere aligned with the surface normal. We then recursively call our color function for this new ray and scale the result by the material\u0026rsquo;s albedo (its reflectivity).\n2. Metallic Surfaces Metal is different. A perfect mirror reflects light such that the angle of incidence equals the angle of reflection. We can calculate this reflected vector $\vec{r}$ using the incident vector $\vec{v}$ and the surface normal $\vec{n}$:\n$$ \vec{r} = \vec{v} - 2(\vec{v} \cdot \vec{n})\vec{n} $$In practice, few metals are perfect mirrors. We can simulate \u0026ldquo;fuzzy\u0026rdquo; reflections by adding a bit of randomness to the endpoint of the reflected vector, controlled by a \u0026ldquo;fuzziness\u0026rdquo; parameter.\n3. Dielectrics (Glass and Water) Glass reflects and refracts. Some of the ray reflects, and some of it refracts at an angle given by Snell\u0026rsquo;s Law:\n$$ \eta \cdot \sin(\theta) = \eta' \cdot \sin(\theta') $$Where $\eta$ and $\eta'$ are the refractive indices of the two media. Implementing this in code requires handling the case of \u0026ldquo;Total Internal Reflection\u0026rdquo;—where the light is hitting the boundary at such a shallow angle that it cannot exit and must reflect. 
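Both the mirror formula and Snell's law fit in a few lines. A hedged Rust sketch following the standard RTOW-style vector math (not the exact code from my repo; vectors are assumed unit length where noted):

```rust
// Reflection and refraction for the metal / dielectric materials above.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn add(self, o: Vec3) -> Vec3 { Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z } }
    fn scale(self, s: f64) -> Vec3 { Vec3 { x: self.x * s, y: self.y * s, z: self.z * s } }
}

/// Mirror reflection: r = v - 2 (v . n) n, with n a unit normal.
fn reflect(v: Vec3, n: Vec3) -> Vec3 {
    v.add(n.scale(-2.0 * v.dot(n)))
}

/// Snell refraction of a unit incident vector v; `ratio` is eta / eta'.
/// Returns None on total internal reflection (the ray must reflect instead).
fn refract(v: Vec3, n: Vec3, ratio: f64) -> Option<Vec3> {
    let cos_theta = (-v.dot(n)).min(1.0);
    let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
    if ratio * sin_theta > 1.0 {
        return None; // Snell's law has no solution: total internal reflection
    }
    let perp = v.add(n.scale(cos_theta)).scale(ratio);
    let parallel = n.scale(-(1.0 - perp.dot(perp)).abs().sqrt());
    Some(perp.add(parallel))
}

fn main() {
    // A 45-degree ray bouncing off a floor whose normal points up (+y).
    let floor_n = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
    let r = reflect(Vec3 { x: 1.0, y: -1.0, z: 0.0 }, floor_n);
    assert_eq!(r, Vec3 { x: 1.0, y: 1.0, z: 0.0 });

    // A head-on ray passes straight through glass, whatever the ratio.
    let n = Vec3 { x: 0.0, y: 0.0, z: 1.0 };
    let straight = refract(Vec3 { x: 0.0, y: 0.0, z: -1.0 }, n, 1.5).unwrap();
    assert_eq!(straight, Vec3 { x: 0.0, y: 0.0, z: -1.0 });
}
```

The None branch is exactly the "cannot exit" case: the dielectric material falls back to reflect() there.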
We also use \u0026ldquo;Schlick\u0026rsquo;s Approximation\u0026rdquo; to simulate how glass becomes more reflective when viewed at a grazing angle.\nDetailed Transactions: The Life of a Ray To better understand how a single pixel\u0026rsquo;s color is calculated, let\u0026rsquo;s look at the \u0026ldquo;transaction log\u0026rdquo; of a single ray as it bounces through the scene. Each bounce is a transaction between the ray and a material, modifying its energy until it either finds a light source or is killed by a recursion-limiting parameter.\nBounce #  Material Hit    Action Taken  Color Attenuation  Remaining Energy\n1         Glass Sphere    Refract       (1.0, 1.0, 1.0)    100%\n2         Metal Sphere    Reflect       (0.8, 0.6, 0.2)    80%\n3         Matte Ground    Scatter       (0.5, 0.5, 0.5)    40%\n4         Background Sky  Terminate     (0.5, 0.7, 1.0)    20% (Final)\nImplementation in Rust I chose Rust for this project for the same reasons I chose it for the Barnes-Hut simulation. It offers the performance of C++ but with some modern tooling and safety. And also (most importantly) because of fashion\u0026hellip;\nIn a ray tracer, you are doing a lot of linear algebra. You need a fast Vec3 implementation. In Rust, we can use operator overloading to make our math look like math.\nimpl Add for Vec3 {\n    type Output = Vec3;\n    fn add(self, other: Vec3) -\u0026gt; Vec3 {\n        Vec3 {\n            x: self.x + other.x,\n            y: self.y + other.y,\n            z: self.z + other.z,\n        }\n    }\n}\n\nimpl Vec3 {\n    pub fn dot(self, other: Vec3) -\u0026gt; f64 {\n        self.x * other.x + self.y * other.y + self.z * other.z\n    }\n}\nFearless Parallelism Ray tracing is what computer scientists call \u0026ldquo;embarrassingly parallel\u0026rdquo;. Every pixel on the screen is independent of every other pixel. To calculate the color of pixel (10, 10), I don\u0026rsquo;t need to know anything about pixel (10, 11). 
Nevertheless, this isn\u0026rsquo;t tackled in the original book implementation, and I wanted to try to add it.\nIn many languages, adding multi-threading to a project involves a week of debugging race conditions and deadlocks. In Rust, I used the Rayon crate. But for Rayon to work, the objects in our scene must be thread-safe. This is where Rust’s Sync and Send traits come into play.\nSend guarantees that we can move our data between threads. Sync guarantees that multiple threads can safely share references to the same data. By marking my Hittable trait (the interface for anything a ray can strike) as Sync, I can share the entire world across all my CPU cores. The compiler ensures that our scene is read-only during the render, making the parallelization as simple as changing a standard iterator into a .into_par_iter().\nBelow is the main render loop using this parallel iteration technique.\n// Parallelizing the render loop with Rayon\n(0..image_height).into_par_iter().rev().for_each(|j| {\n    let mut line_buffer = Vec::new();\n    for i in 0..image_width {\n        let mut pixel_color = Color::new(0.0, 0.0, 0.0);\n        for _ in 0..samples_per_pixel {\n            let u = (i as f64 + random_double()) / (image_width - 1) as f64;\n            let v = (j as f64 + random_double()) / (image_height - 1) as f64;\n            let r = camera.get_ray(u, v);\n            pixel_color += ray_color(\u0026r, \u0026world, max_depth);\n        }\n        line_buffer.push(pixel_color);\n    }\n    // Write line_buffer to file...\n});\nGoing from one core to sixteen cores on my machine reduced the render time from several minutes to just a few seconds.\nWASM: Bringing the Rays to the Browser One of the most compelling reasons to use Rust for a project like this is its first-class support for WebAssembly (WASM). In my Barnes-Hut post, I included a live demo running directly in the browser.\nFor the browser-side implementation, I used the Leptos framework. Leptos is a web framework for building reactive UIs (a Rust React). 
Using its component-based architecture, I was able to build the \u0026ldquo;Render\u0026rdquo; button and the canvas integration that displays the ray-traced result in real-time.\nHowever, moving from a desktop CLI tool to a browser-based renderer introduced a significant hurdle: parallelism.\nOn the desktop, as we saw, Rayon makes multi-threading trivial. But the browser\u0026rsquo;s execution model is fundamentally different. While Web Workers exist, they don\u0026rsquo;t share memory in the same way that threads do on a native OS. In a standard WASM build, you are effectively locked into a single-threaded world.\nThe Parallelism Paradox in the Browser In native Rust, Rayon uses a work-stealing scheduler to distribute tasks across all available CPU cores. In WASM, Rayon doesn\u0026rsquo;t work out of the box because the underlying primitives—threads and atomic memory operations—require a specific environment.\nTo get true parallel rendering in the browser, you have to navigate a maze of security requirements. You need to enable SharedArrayBuffer support, which requires setting specific \u0026ldquo;Cross-Origin\u0026rdquo; headers on your web server. Even then, you need a specialized toolchain like wasm-bindgen-rayon to bridge the gap between Rust\u0026rsquo;s threads and JavaScript\u0026rsquo;s workers.\nFor this initial foray into the browser, I opted for a single-threaded render. An image that takes 5 seconds on my desktop takes nearly a minute in the browser. Having many cores is great; it would be much better if we could actually use them.\nResults The final result of the first book is the one I included on the top of this post. Going into the second book adds a second shape: a rectangle (which we turn into a box). This allows us to model a famous graphics benchmarking scene called the Cornell Box. We also add a direct light source to this scene as you can see below.\nThere are many subtle features to the render I didn\u0026rsquo;t describe in detail. 
In the first image we can see things like Texture Mapping, which allows us to map a 2D image onto a 3D sphere; Motion Blur, in which a moving sphere shows up as a smeared object; and Depth of focus, which is dictated by the size of the lens aperture. There are also some techniques to speed up renders by limiting the search space where we try to find an object that is hit by a ray. Here we implemented BVH (Bounding Volume Hierarchies), a spatial data structure (much like the QuadTree in my Barnes-Hut post) that allows us to skip large groups of objects, bringing the cost down to $O(\log n)$.\nThere\u0026rsquo;s a lot more to explore in this world and I absolutely loved this project. Highly recommended. Check out the code, and the original book from which I took the implementation.\nrepo\n","permalink":"https://blog.winer.co.il/posts/rust-rays/","summary":"\u003ch1 id=\"making-a-photorealistic-image-in-code\"\u003eMaking a photorealistic image in code\u003c/h1\u003e\n\u003cp\u003eProgramming a computer to make a photorealistic image is a very cool party trick.\nThinking about the problem from first principles, it\u0026rsquo;s not clear what you need to get this thing working. I mean, there\u0026rsquo;s a clue in the name - it\u0026rsquo;s tracing rays. That gives us a clear indication that we will be following light rays along in a 3d scene, sure. But what was not clear to me is how much is needed, how much detail, when modelling the light-matter interaction to get a result that looks good. It turns out that the answer is: surprisingly little. Here\u0026rsquo;s what I made by following an amazing free three-book series by Peter Shirley, Trevor David Black and Steve Hollasch: \u003ca href=\"https://raytracing.github.io/\"\u003e\u003cem\u003eRay Tracing in One Weekend\u003c/em\u003e\u003c/a\u003e.\u003c/p\u003e","title":"Rust ray tracer project"},{"content":"This is a short post with very little real content. 
I wanted to test how easy it would be to make a snake game with GPT-5, using the Codex programming assistant (agent?) that is able to generate pull requests directly into a GitHub repo.\nMy goal was to make something simple that compiles to wasm and then deploy it on a Hugo page (which is the publishing tool used to build this blog), just to see how easy it may be. Conclusion: it\u0026rsquo;s easy. I hardly read the code at all, so for all I know it may be very bad.\nTo make things more interesting, I asked it to add a 3d mode where the snake can wander around in 3 dimensions. There, I spiralled into a very bad time. Although the LLM was able to generate a working game (which was pretty impossible to control), there were tons of errors found upon deployment and I ended up spending about 2 hours trying to get the thing deployed properly. I ended up not doing the 3d version. I guess this is where the story ends for now. Pure vibe coding has its limits at the moment, and 3d-snake-wasm seems to be near the frontier.\nEnjoy what we were able to build!\n","permalink":"https://blog.winer.co.il/posts/snake-by-llm/","summary":"\u003cp\u003eThis is a short post with very little real content. I wanted to test how easy it would be to make a snake game with GPT-5, using\nthe \u003ca href=\"https://openai.com/codex/\"\u003eCodex\u003c/a\u003e programming assistant (agent?) that is able to generate pull requests directly into a GitHub repo.\u003c/p\u003e\n\u003cp\u003eMy goal was to make something simple that compiles to wasm and then deploy it on a \u003ca href=\"https://gohugo.io/\"\u003eHugo\u003c/a\u003e page (which is the\npublishing tool used to build this blog), just to see how easy it may be.\nConclusion: it\u0026rsquo;s easy. I hardly read the code at all, so for all I know it may be very bad.\u003c/p\u003e","title":"Snake by LLM"},{"content":"I\u0026rsquo;m not a musician. I do enjoy listening to music very much and I tried picking up an instrument a few times. 
I never managed to keep at it for long enough to make anything you could call music. Being almost 40, it\u0026rsquo;s probably too late to really get into it in any serious way. However, I would say that at one point in my life I did play something that is a lot like an instrument: A laser. It turns out a laser is surprisingly similar to a unique electronic instrument called a modular synthesizer. Let me try and explain!\nThe analogy of musical instruments to lasers and also to other electronics equipment (Radio Frequency electronics are especially similar) is in my opinion a very strong one. The fundamental similarity is due to the fact that they all manipulate oscillations. In music, the oscillations are of the air. In electronics, they are of electrons moving back and forth through conductors, and in a laser they are also related to the motion of electrons in the lasing medium, but those oscillations cause light to be emitted. It\u0026rsquo;s nice, to me, that the analogy reveals how there is as much of an art as there is a craft to the operation of these things. It\u0026rsquo;s not just the piano that requires an artistic flair, it\u0026rsquo;s the laser too.\nModular synths A modular synth looks like a high school electronics project or a 1920s phone switchboard. It is flashing LED lights, electrical wires and wiggly oscilloscope traces galore. It has a super appealing aesthetic that\u0026rsquo;s somehow from the future and past at the same time. Just look at this picture\nIt\u0026rsquo;s a real beauty. Looks very messy, but as these things go, a trained eye can make sense of the mess. It\u0026rsquo;s essentially a large electrical circuit whose knobs control the sound. The output of the entire thing is then fed to speakers or into a recording setup (or a computer).\nSubtractive synthesis The way sound is generated here is by Subtractive synthesis. This technique was popularized by Robert Moog, who founded the famous company of the same name. 
The synth in the image above is a Moog synthesizer. Subtractive synthesis basically means you start out with a tone and then shape it in different ways. There\u0026rsquo;s an amazing video by David Hilowitz that goes into detail in a fun and musical way. I highly recommend it. I\u0026rsquo;ll do my best to give the gist here.\nThe start of the generation chain is a Voltage Controlled Oscillator (VCO), which outputs a signal with a sine, square, triangle or saw-tooth waveform, at a given frequency.\nDifferent waveform shapes have a different spectral content and as a result have a different feel when perceived by the human ear and interpreted by the human brain. This is the first artistic choice the musician makes here. The VCO output can then be fed to additional modules. One example is the VCF, a voltage controlled filter.\nThe VCF is a circuit that can selectively remove frequencies from the input VCO signal. Its knobs are used to select which frequencies remain and which are removed, which kinda gives the subtractive technique its name. If you remove low frequencies, the bass notes, you remain with a higher-pitched sound, and if you remove the treble the overall feel is of a lower tone.\nAdditional shaping modules include the Envelope Generator, which can set the overall shape of the note that is being played. Although notes are mainly characterized by their central frequency (e.g. the note A aka \u0026ldquo;La\u0026rdquo; in the solfège system has the frequency 440 Hz) when you think of the different ways notes can be played on different instruments you realize they can produce sounds that linger or ones that end quickly. Sounds that begin explosively, or ones that slowly creep up. And sounds that end sharply or slowly die out. These parameters are apparent in the overall envelope as the Attack (how rapidly the note starts), Decay, Sustain and Release parameters.\nThere are many more module types we can go into. 
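To make the signal-chain idea concrete, here is a toy numerical sketch (in Rust, to match the ray tracer post) of a VCO feeding a VCF: a naive sawtooth run through a one-pole low-pass filter. This is purely illustrative, not real DSP — there is no band-limiting or anti-aliasing here:

```rust
// Toy subtractive chain: a sawtooth "VCO" fed through a one-pole low-pass "VCF".

/// Naive sawtooth in [-1, 1), evaluated at time t seconds.
fn saw(freq: f64, t: f64) -> f64 {
    2.0 * (freq * t - (freq * t + 0.5).floor())
}

/// One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
/// Smaller alpha = lower cutoff = darker sound.
fn low_pass(input: &[f64], alpha: f64) -> Vec<f64> {
    let mut y = 0.0;
    input.iter().map(|&x| { y += alpha * (x - y); y }).collect()
}

fn main() {
    let sample_rate = 44_100.0;
    // One tenth of a second of a 220 Hz saw wave.
    let samples: Vec<f64> = (0..4410)
        .map(|n| saw(220.0, n as f64 / sample_rate))
        .collect();
    let filtered = low_pass(&samples, 0.05); // turn the "cutoff knob" down

    // The filter removes high-frequency content, so the sharp saw
    // resets are smoothed out and the peak amplitude shrinks.
    let peak = |v: &[f64]| v.iter().cloned().fold(0.0_f64, |m, x| m.max(x.abs()));
    assert!(peak(&filtered) < peak(&samples));
}
```

A real VCF is of course an analog circuit with resonance and a voltage-controlled cutoff, but the subtract-the-harmonics idea is the same.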
For example, a particularly interesting module is the ring modulator. This module mixes frequencies together (adding or subtracting input signal frequencies to make new ones). This can, for example, make things sound very metallic with a fast \u0026ldquo;twang\u0026rdquo; that sounds like hitting a metal plate.\nThe space of possibilities is vast, and the true power comes from composition: the output of one module can be chained into the input of the next. That\u0026rsquo;s the origin of the plug board appearance of the instrument and a big part of its overall appeal.\nOne final feature of the modular synthesizer I\u0026rsquo;d like to point out is that it\u0026rsquo;s not only the sound signal that is chained between the modules. Crucially, many of the control knobs that tweak the operation of modules can also be electrically controlled. A module called the Low Frequency Oscillator (LFO) generates a signal at a \u0026ldquo;low\u0026rdquo; frequency (here low means typically lower than the audio frequencies of hundreds to thousands of Hz) and that signal can be fed into the filters, amplifiers, envelope generators and so on. This is very useful in creating a dynamic feel to the whole musical piece that is hard to achieve manually.\nA laser This is a laser controller from my old lab at the Weizmann Institute. It actually controls a chain of lasers that start at an invisible infra-red and end with a very bright blue beam.\nLike the modular synth, this controller also has discrete modules which are interconnected by electrical cables. The similarities are not just visual though! Many of the modules in this rack, for example the one labeled \u0026ldquo;scan control\u0026rdquo;, are similar to an LFO. It tweaks the parameters of elements in the laser. Depending on what exactly it is connected to, that can change the laser frequency (color), intensity or other things.\nThe laser instrument extends beyond just this box though. 
If we open the box from which light actually comes out, we can see more similarities to the analog synth.\nThis is the inside of a blue laser from the German company Toptica. It\u0026rsquo;s a fairly fancy one that can generate an output of over 1 W of continuous and beautiful blue light that appears super bright to the human eye and can make you blind in an instant.\nLight in this box starts its life in the bottom left corner, where I labeled the \u0026ldquo;VCO\u0026rdquo;. Like in the synth, you need to start with a waveform. Here that waveform is a very pure sine wave at many hundreds of Terahertz (we\u0026rsquo;re dealing with optical frequencies here, not audio, but the idea is the same). Unlike the audio synth case, we can\u0026rsquo;t select a triangle wave here, just a sine. This \u0026ldquo;VCO\u0026rdquo; outputs an invisible (but very powerful) infra-red laser beam. The blue light that comes out in the end will blind you; this infra-red will blind AND burn you.\nIf you follow my crudely drawn line you see it passing through a few components. The big black cylinder is an optical isolator. It acts like an electrical diode for light, allowing light to flow in one direction but not back to where it came from. This is needed to prevent the laser from reflecting back to its source and burning it like some freaky spiderling that eats its mother after hatching. After passing through the isolator and bouncing off mirrors, the powerful red light goes into a box labeled Amplifier. This increases the light intensity and is like a synth module we didn\u0026rsquo;t discuss but you can guess what it does - a voltage controlled amplifier.\nFrom there the light goes through a small blue cylinder which has a crystal inside. This is an Electro Optical Modulator. The crystal can slightly (but quickly) change the color (frequency) of the light when it is acted on by an electrical signal. 
Again, this is very similar to what one would do in the analog synth, in one of the non-linear modules such as the ring modulator (non-linear means any change of frequency in the system, but especially when adding new frequencies).\nFinally, the light goes into the frequency doubler. This is a module that also manipulates frequency but in a much more dramatic way. It takes red light and doubles its frequency by converting pairs of red photons into a single blue photon. This is a process called second harmonic generation and is the end of this optical \u0026ldquo;musical composition\u0026rdquo; if all you wanted to do was generate one beautiful note.\nPlaying a symphony I missed one part out in this story. We don\u0026rsquo;t play a single note when playing music. And we don\u0026rsquo;t just lase willy-nilly without control when running a scientific experiment. So how is it that a modular synth, and a laser, can do complex things in time rather than just doing one thing? What is needed is a control system. For a synth there are parts that add memory and logic to the musical composition. Logic gates (AND, OR), Shift registers, Loopers and many more can be used to make a musical piece. In scientific experiments, depending on the experiment itself and how tight its timing constraints are (do we need to control the lasers at 1s intervals? 1 micro-second? 1 nano-second?) the control system can become one of the most challenging parts of the entire experiment.\nTo finish this off, here\u0026rsquo;s a piece by the YouTube creator, LOOK MUM NO COMPUTER. A real artist and a master of electronics who makes analog synths and many other things. I\u0026rsquo;ve been listening to this one endlessly. Enjoy.\n","permalink":"https://blog.winer.co.il/posts/analog_synths_and_lasers/","summary":"\u003cp\u003eI\u0026rsquo;m not a musician. I do enjoy listening to music very much and I tried picking\nup an instrument a few times. 
I never managed to keep at it for long enough to\nmake anything you could call music. Being almost 40, it\u0026rsquo;s probably too late to\nreally get into it in any serious way. However, I would say that at one point in\nmy life I did play something that is a lot like an instrument: A laser. It turns out a\nlaser is surprisingly similar to a unique electronic instrument called a modular\nsynthesizer. Let me try and explain!\u003c/p\u003e","title":"Analog synthesizers and lasers"},{"content":"A beautiful plot The aesthetics of scientific plots is one of those things. You don’t have to know what makes something beautiful to recognize that it is beautiful. I’m not sure what provides a plot with that oomph, or je ne sais quoi. Is it just a great Signal-to-Noise Ratio (SNR) that makes the eye sense that something real is being presented? It’s probably more than that! A straight-line plot with great SNR is not usually particularly beautiful. I’d suggest that some complexity has to pop up too, to make the viewer feel there’s interest here beyond the trivial. I’m sure you could train a neural network to sort pretty plots out from non-pretty ones but… whatever… This plot below, as I’m sure you’d agree, is very pretty indeed. Tragically, due to very silly psychology reasons (and also maybe the follies of the Scientific Method), it has never seen the light of day until now.\nDone is better than perfect So why wasn’t this plot ever released? An unhealthy mix of perfectionism and procrastination, both arising from that most potent and primordial emotion that is Anxiety. It is triggered by so many different things, and once that dopamine-fueled, lizard-brain targeting snowball gets rolling it’s hard to stop it. In addition, which is a point I mostly see now in retrospect, I was just overwhelmed.\nI was afraid I don’t really understand everything about the plot (which I don’t). 
I was worried that checking any more deeply was going to expose that it’s all a lie and the plot would have to be thrown away (which it might). It’s the fear of the experience of a child that presents a kindergarten painting to a parent only to see it promptly filed in the recycling bin rather than getting a prominent presentation on the in-house Louvre gallery that is the kitchen fridge.\nMoreover, I was having a hard time making a game plan for my next steps. There were more ambitious and lofty goals in store, and I felt committed to those as part of my PhD experimental plan. That was a plan that was presented to, and approved by, an external committee. This made it hard to make concrete decisions when looking ahead: should I meticulously write up this part of the experiment, adding supporting evidence and measurements as I go along, or should I just plow on forward? Who knows what would happen if I put too much time and effort into this side-quest? Will I ever reach my end goal? If not, then what’s the point?\nIrrespective of my confusion and shortcomings, this is also a failure of modern academia and its peer publishing process. It takes a hell of a lot of effort to publish and that puts a fairly high barrier when it’s time to do so. It’s very likely many small side-projects never become known by others simply because of how hard it would be to fully write up something to what is deemed “sufficient” quality. Would it not be better to just release more raw output in the hope it helps someone, somewhere?\nRegardless, this all amounted to a situation where things didn’t seem like they could become perfect and as a result were never fully done.\nRydberg spectroscopy for busy professionals So what\u0026rsquo;s this plot all about? 
It\u0026rsquo;s a measured absorption spectrum of a gas of cold Rubidium atoms, but one done in a cute (and maybe unreported) way.\nIn spectroscopy we measure how strongly a sample absorbs light as a function of the light\u0026rsquo;s color.\nThe absorption spectrum is a nice thing to know as it tells us about internal structure: what a sample (maybe a certain kind of atom) looks like on the inside. This is due to that very fundamental and important principle: conservation of energy. Energy can\u0026rsquo;t just disappear, so if it is lost from a beam of light going into a sample, it must have gone somewhere. Most often: it changed how an electron orbits a nucleus or how a molecule vibrates.\nIn 2015-2020 I was doing my PhD in Prof. Ofer Firstenberg\u0026rsquo;s lab in the Weizmann Institute. I was working with a bunch of awesome people such as my great friend Dr. Lee Drori, who took the data for this plot with me (and many others) and is also the inventor of the method itself. In this experiment we were interested in Rydberg atoms. Maybe I\u0026rsquo;ll get into why that was the case in some other post but for now that\u0026rsquo;s not so important.\nThe point with Rydberg atoms is that they have electrons very far from their nucleus compared to what one might consider normal. The electrons are thousands of times further away than what would be found in a \u0026ldquo;mildly\u0026rdquo; excited atom. The image below, which I took from one of the slides of my PhD defense, shows a Rubidium atom in two different states. The image on the left shows a nice and relaxed atom that hasn\u0026rsquo;t been bothered by anyone. It\u0026rsquo;s in the ground state, which is indicated by a cryptic label \\(5S_{1/2}\\) which we shouldn\u0026rsquo;t worry too much about for now. The bright yellow in the plot is where you are more likely to find an electron in this state and the blue bits are where you\u0026rsquo;re not so likely to find it. 
In the image to the right, there\u0026rsquo;s a Rydberg state. It\u0026rsquo;s not so precisely defined what counts as a Rydberg state, but this one sure does. This time, it\u0026rsquo;s not a \\(5S_{1/2}\\) but \\(100S_{1/2}\\), the \\(100\\) being the key number that makes this thing \u0026ldquo;big\u0026rdquo;. It may seem at first glance that this is only marginally larger than the other, but if you look closely at the axes\u0026rsquo; scale you see we\u0026rsquo;re dealing with an atom that\u0026rsquo;s about 10,000 times the size of the ground-state version of itself. It\u0026rsquo;s almost a micron in diameter, which is almost macroscopic!\nSense and sensitivity These large atoms are useful because the electrons can hardly feel the attraction of the nucleus at all. The electrical forces binding them together barely suffice to keep this system from an inevitable divorce. This weak attraction is actually useful in certain situations because it allows these atoms to be used as sensors. If every little thing is going to push you over the edge, you\u0026rsquo;re a great litmus test for, well, weak things.\nRydberg atoms can be used to sense very weak electrical fields and in our case we used them to create and observe quantum-mechanical phenomena. But as I said, that\u0026rsquo;s a story for another day.\nSo in this plot (in fact: plots), what you see is how much of the light is falling on a photo-diode as a laser frequency (read: color) is changed. Every single point on the blue line is a single reading where the frequency was changed and the amount of light falling on that diode was measured. Note, I didn\u0026rsquo;t yet say what light is on the photo-diode, and it\u0026rsquo;s probably not what you might expect; herein lies the neat trick.\nEach dip represents a way in which the Rydberg atom can absorb light. 
So each one of those dips represents some energy difference for which the electron in the atom can take the light coming in and move around the atomic nucleus in some other way. It can rotate one way or another, move to some new average distance, be aligned in different ways with respect to the nucleus and so on. There\u0026rsquo;s A LOT that a trained eye can see by looking at this plot.\nThe different ways an electron can revolve around the nucleus are the electronic orbitals, and the labels specify all the information one may want to know about them. It\u0026rsquo;s their unique ID. You can see these IDs for each dip on top of the main plots (a), (b) and (c). This makes these plots kind of a map, or a fingerprint, of the Rubidium atom when it is excited (by the laser) into Rydberg states. And as it turns out, this isn\u0026rsquo;t always something you can just find in a book, and even if you could, the details of your experimental system may shift things around a little.\nThe great sensitivity of Rydbergs means that if the light shining on them is too strong, it\u0026rsquo;s possible to destroy them by kicking out the electron (ionization). So measuring the Rydberg spectrum simply by looking at how much of a (very weak) laser goes through a cloud of them is not easy. It certainly wouldn\u0026rsquo;t produce a plot like the one above with nice deep dips and very low noise. So how did we do it?\nThe blue killer The trick Lee came up with was cunning. To make cold atoms you need to build a trap for them. This trap, a Magneto-Optical Trap (MOT), uses a special arrangement of electro-magnets and lasers to push atoms into one tight place in space. When it works, a ball of a few million atoms clumps up in this well-defined place. 
What\u0026rsquo;s amazing about it (there are many amazing things about it) is that because the laser light used is scattered by the atoms (that\u0026rsquo;s actually how you get to exert a force on an atom, by scattering light off it) you can see this blob of atoms with the naked eye. So there\u0026rsquo;s quite a bit of light there, relative to the power of the light that brings atoms up to the Rydberg state.\nThere are a few ways by which an atom can get to the Rydberg states, but one way is to have two lasers working together. Because there\u0026rsquo;s a lot of energy needed to bring the electron out so far, instead of using a single laser very high up in the electromagnetic spectrum (an ultraviolet laser, which would be inconvenient for many practical reasons) we can use two lasers that have more reasonable colors. One common combination people use is a red laser in conjunction with a blue one. This \u0026ldquo;ladder\u0026rdquo; configuration is shown in the image below where we get to the Rydberg state with two steps (that actually happen together).\nSo here\u0026rsquo;s the trick: if we use the red light from the MOT with a second laser that is blue, and if the combined energy of these two lasers is exactly the amount of energy needed to bring the atom from the ground state to the Rydberg state, then and only then, do we excite the atom to a Rydberg state. But then something great happens - the whole atomic cloud in the trap disappears! The MOT is killed almost immediately. This happens because, unlike the ground-state atom, for technical reasons we won\u0026rsquo;t get into, the Rydberg atom is not trapped. That\u0026rsquo;s the \u0026ldquo;Blue Killer\u0026rdquo; effect and it is great because it gives a huge, clean and easy-to-spot effect.\nSummary So that was that. A nice effect, with a lot more intricate tidbits to speak about if you really look in depth. 
We could say things about what happens to this effect as we go to really high Rydberg numbers, what electrical fields the atom may be experiencing and how they affect the positions of the dips (as I said, these are sensitive probes), and maybe even more interesting things. But it just didn\u0026rsquo;t coagulate into a thing that was ever submitted to a peer-reviewed journal.\nI took a few lessons from this unfinished paper. But I can\u0026rsquo;t say I don\u0026rsquo;t fall into many of the same pitfalls today: not sharing my work because I fear it\u0026rsquo;s just not good enough. But really, I know that this is a shame. It\u0026rsquo;s best to share: either you find out it\u0026rsquo;s much better than you thought or you find the error, which is almost always much smaller than you feared.\n","permalink":"https://blog.winer.co.il/posts/the-unfinished-paper/","summary":"\u003ch2 id=\"a-beautiful-plot\"\u003eA beautiful plot\u003c/h2\u003e\n\u003cp\u003eThe aesthetics of scientific plots is one of those things. You don’t have to know what makes something beautiful to recognize that it is beautiful. I’m not sure what provides a plot with that oomph, or \u003cem\u003eje\u003c/em\u003e \u003cem\u003ene\u003c/em\u003e \u003cem\u003esais\u003c/em\u003e \u003cem\u003equoi\u003c/em\u003e.\nIs it just a great Signal-to-Noise Ratio (SNR) that makes the eye sense that something real is being presented?\nIt’s probably more than that! A straight-line plot with great SNR is not usually particularly beautiful.\nI’d suggest that some complexity has to pop up too, to make the viewer feel there’s interest here beyond the trivial.\nI’m sure you could train a neural network to sort pretty plots out from non-pretty ones but… whatever…\nThis plot below, as I’m sure you’d agree is very pretty indeed. 
Tragically, due to very silly psychology reasons (and also maybe the follies of the Scientific Method), it has never seen the light of day until now.\u003c/p\u003e","title":"The unfinished paper"},{"content":"Program, run, repeat Software development, like many creative activities, is an iterative process. You try to figure something out, have an initial thought about the correct way to go about things and give it a first go. In quantum algorithm development the same principle applies. You should start with a small model, see if it works and then work your way up in complexity.\nAt the moment you can’t easily access a quantum processor with more than a few tens of qubits. Even for these smaller devices, queue times can be very long and larger systems are not freely accessible. Nothing is worse than developing your algorithm, sending it to sit in a queue for a week and then coming back to find the result is nonsense. Often you end up with a bunch of zeros just because you forgot an H gate somewhere (or your idea was just plain bad!).\nSo what should you do? Simulate, of course! Simulating a quantum circuit can only be done for limited circuit sizes. This is, in fact, very good. If we could easily simulate any quantum circuit, quantum computers would be quite pointless and the vibrant quantum computing ecosystem wouldn’t exist! (Also, I wouldn’t be paid to play with these things and that would be sad.)\nIt’s worth reminding ourselves why simulating quantum circuits is difficult. The number of possible states scales exponentially with the number of qubits. Because we need to store the quantum state on our digital (classical) computer, we very quickly run out of memory with which to perform the simulation.\nDespite the annoying fact that only fairly small circuits can be simulated, it’s still very useful to perform small-scale simulations. This makes seeing where our ideas lead much quicker and more efficient. 
Moreover, simulators can give you access to the entire state vector and its complex coefficients. This is information that no quantum experiment can ever fully give you and this can be very useful in debugging algorithms as they are being developed.\nQuantum Simulators Quantum states are represented by complex-valued vectors in Hilbert space. A quantum state is transformed to another quantum state by a unitary matrix. These two statements contain a lot of what you need to know about quantum mechanics in general. This convenient mathematical formulation makes the simple way to simulate a circuit very intuitive. You store the initial state as a vector of complex numbers, represent each quantum gate as a matrix and then successively multiply the state by the matrices you encounter as you go through the circuit. Multiplying a matrix by a vector has a cost quadratic in the vector’s dimension. So we have exponential complexity in space and, since the dimension is itself exponential in the number of qubits, exponential complexity in time as well. From the combination of these two, it’s pretty clear this is the type of problem that’s going to take a very long time to solve, even with quite small input sizes.\nState vectors and density matrices The brute-force approach described above works. It’s the bread-and-butter, as-simple-as-it-gets state-vector simulator. That’s not the only way though, and it doesn’t even model some very important things. Noise, for example, which we know can never be avoided and is part and parcel of quantum computation, even in small circuits, is not treated here. If all the matrices we use are unitary, we’re not modeling noise (decoherence) and that’s a pretty poor model.\nTo improve things we can use the open-quantum-system formalism. One way is to use density matrices instead of state vectors. A density matrix is an extension of the state vector which includes both pure quantum states (vectors in Hilbert space) and statistical mixtures of such pure states. 
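As an aside, the brute-force state-vector recipe described earlier really is just a few lines of numpy. Here is a minimal sketch, with the gate matrices written out by hand (this is an illustration, not any particular simulator's API):

```python
import numpy as np

# Single-qubit and two-qubit gate matrices (quantum gates are unitary).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def simulate(gates, n_qubits):
    """Successively multiply the state vector by each full-width gate matrix."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for gate in gates:
        state = gate @ state
    return state

# Two-qubit Bell-state circuit: H on the first qubit, then a CNOT.
bell = simulate([np.kron(H, I), CNOT], n_qubits=2)
probs = np.abs(bell) ** 2  # measurement probabilities: 50/50 on |00> and |11>
```

Note the exponential cost is visible right in the code: the state has `2 ** n_qubits` entries, and each full-width gate matrix is that squared.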
The idea is that maybe we simply don’t know if we have one superposition or another: maybe there’s a 30% chance of having one Bell state and 70% of having another. It turns out that using this extension makes it convenient to model various types of errors. Simulators using density matrices are a bit more sophisticated, then, at least in what phenomena they can model, but they’re just as poor in terms of performance.\nClever tricks There are many more ways to simulate quantum systems. Many more than can fit in a single post. It is instructive, however, to include one more simulator type because it gives us a small insight into how we can be clever and improve our computational performance, if only slightly.\nOne clever trick simulators can use, at least in SOME cases, is to take advantage of situations where the circuit is not highly entangled. Of course, without entanglement there is no added computational power in a quantum model. But it’s not always the case that all qubits are entangled with all other qubits. One such situation is when the circuit simulates interacting chains of particles where there’s only interaction between near neighbors, or some subset of the particles.\nIn such cases, when there is limited entanglement, the state can be broken into subsets of entangled states. It’s then possible to consider much smaller matrices than the full 2^n x 2^n ones needed in the basic state-vector simulation. It can be shown that under the right conditions of limited entanglement (what this means has to be explained precisely, but probably not in this post) the simulation can be performed with a cost that is less than exponential in the number of qubits. 
This approach is known as a Matrix Product State (MPS) simulator.\nUnsurprisingly, if we know things about the expected properties of the quantum states we are working with, or limit the kinds of operations that we are performing, we can simulate larger quantum circuits than if we just brute-force it all.\nClassiq simulations and the facts of life\nOn the Classiq platform, simulators are on an equal footing with real quantum hardware. You can build your algorithm and select a simulator to run on in the hardware selection screen. This allows you to quickly get insights into how the quantum algorithm behaves and whether things make sense. We provide simulators from a range of vendors, both on the cloud and as part of the platform itself. Running on IBM, IonQ, Rigetti, Azure and AWS requires a user to provide credentials. However, users can simulate on Nvidia and Qiskit Aer simulators without providing any credentials and completely free of charge.\nOnce you are happy with the simulated results you can quickly and seamlessly switch over to real quantum hardware. Quickly and seamlessly apart from the long queue times, that is. But that’s just something we will have to live with until we get more cloud-based quantum computers.\n","permalink":"https://blog.winer.co.il/posts/quantum-sim-basics/","summary":"\u003ch1 id=\"program-run-repeat\"\u003eProgram, run, repeat\u003c/h1\u003e\n\u003cp\u003eSoftware development, like many creative activities, is an iterative process. You try to figure something out, have an initial thought about the correct way to go about things and give it a first go. In quantum algorithm development the same principle applies. You should start with a small model, see if it works and then work your way up in complexity.\u003c/p\u003e\n\u003cp\u003eAt the moment you can’t easily access a quantum processor with more than a few tens of qubits. Even for these smaller devices, queue times can be very long and larger systems are not freely accessible. 
Nothing is worse than developing your algorithm, sending it to sit in a queue for a week and then coming back to find the result is nonsense. Often you end up with a bunch of zeros just because you forgot an H gate somewhere (or your idea was just plain bad!)\u003c/p\u003e","title":"Quantum simulators and the Classiq platform"},{"content":"This post was originally posted on the Classiq blog. Written with Ariel Smoler.\nDepending on who you ask, the size of the cyber security market is currently (as of August 2023) estimated at a few hundred billion USD/year. It’s harder to estimate the size of the internet-of-things market as the definitions are more vague than those of cyber-security. Is a web-connected toaster an Internet-of-Things (IoT) device? Sure, maybe, but what about a Radio-Frequency (RF) identification tag with an embedded microchip stuck on an egg carton? Yeah, that’s probably IoT-related too. Coincidentally, both examples can be targets for cyber-attacks. However, it’s unclear which evil cyber-capable red-tailed fox will likely target that lowly yet delicious, internet-connected treat.\nWhatever the exact market size, both markets are unarguably huge and lucrative. The quantum computing market, though consummately precocious, is still much smaller. Let\u0026rsquo;s amicably estimate its size to be about 1% (or less) of that of the cyber market, at about 1B USD/year. Nevertheless, the growth projections are staggering.\nSome of the projected growth results from how quantum computation capabilities are enablers of other markets. In particular, in today’s post, we will discuss how using quantum algorithms for combinatorial optimization, which can potentially outperform their classical counterparts, is useful in the industries of our exposition: IoT and cybersecurity.\nA patch a day keeps attackers at bay. What is cybersecurity, then? 
It’s the art and craft of preventing malicious actors from gaining access to things you’d rather they don’t have access to—your data, your computers, and your toaster. The tricky thing is that we don’t often have disconnected computer systems these days. Everything is meshed via a network, essentially a large collection of interconnected assets.\nEach asset on a network either is software or has software associated with it. A sensitive database contains information you want to keep to yourself and runs on a specific database vendor and version. An oversight in the design of the network layer of some operating system has left a gaping hole someone can exploit to turn your computer into a distribution node for unauthorized Harry Potter fan fiction; the list goes on.\nAs with most complex systems, computer systems will always inadvertently allow for use not intended by their original designer. Luckily, for computers, we can deploy patches closing any known holes. Correctly patching assets on a network is one of the most critical factors for a healthy cyber-secure network. It turns out that’s not a trivial task.\nThere are many difficulties in hermetically sealing all known issues using patches. First and foremost, the sheer number that needs to be tracked and applied is overwhelming. Below is the number of published and known issues collected on CVE, the online vulnerability register. The trend of increasing threat is clear, and the number is well in the tens of thousands. The fact that patching also needs to consider compatibility issues, can affect system performance, and has to be coordinated across a heterogeneous network makes it clear this is a formidable challenge to overcome.\nJust the right patch in all the right places. Physicist Eugene Wigner once commented that mathematics is “unreasonably effective” in the natural sciences. In layperson’s terms, it’s a pretty good idea to make a mathematical model if you want to solve a problem. 
So here, too, we probably should. The model presented here follows this original work and is a graph-theoretic model. We begin with some definitions.\nWe work with a graph linking assets and vulnerabilities. An asset is an item that has value for an attacker—for example, data stored in the system. The availability, consistency, and integrity of assets are to be preserved. A vulnerability is a flaw by which an attacker can exploit an asset, e.g., an incorrectly patched piece of software, a weak password, a poor network configuration, etc. A connection between an asset and a vulnerability means that the asset is susceptible to it. This computer has a silly password, that computer is running stock Windows XP from 20 years ago, and so on. Such links mean there are known ways to get access to that computer. Ways like guessing that a password is “Password” (which, inexplicably, is still a very probable guess).\nThe danger of having affected assets is that such “chain links” can be joined together. So, when combined, multiple and seemingly unrelated insecure links allow an attacker to move laterally through the network. The attacker can then establish a complete attack scenario for targeting their desired critical asset. Such a set of moves is illustrated in the image below. Though it sounds like the name of a particularly bad (particularly good?) Steven Seagal film from 1998, stringing vulnerabilities together is called a Kill Chain.\nExpensive things and where to put them We still want to patch a network. We really do. But we need to take a slight detour where we explain an essential mathematical concept and hint at how it may be used to secure an Internet of Things application.\nAn example of an IoT scenario is where a large collection of communicating sensors is used. 
Wireless Sensor Networks (WSNs), as they are called, can be deployed to protect a forest from wildfires (by deploying sensors sensitive to various environmental factors like humidity, temperature, and air pressure), to monitor industrial applications where many machines work in unison, to coordinate transport networks, and in many more settings. The diagram from [1] below shows an 11-node WSN. Node number 1 is marked differently from the others as it controls the network and centrally collects information from it.\n[1] Yigit Y, Dagdeviren O and Challenger M 2022 Self-Stabilizing Capacitated Vertex Cover Algorithms for Internet-of-Things-Enabled Wireless Sensor Networks Sensors 22 3774 Online: http://dx.doi.org/10.3390/s22103774\nA WSN, like any other computer network, can be a target for attack. Maybe I own an ice cream factory, and my competition really wants to know at what temperature I make my mint-chocolate-chip. So, how would one protect their network?\nSome network nodes can be more intelligent than others. They can have monitoring capabilities allowing them to, for example, inspect the contents of information passed along the network (network packets) and create alerts if packets appear malicious. This type of application is called a Link-Monitoring scenario.\nIt’s not feasible to make all nodes of the WSN monitoring nodes. That would be an expensive solution in terms of actual cost per node and energy consumption. Imagine, for example, that your nodes are spread across a vast forest and run on battery power. In such a case, you may need to occasionally change batteries. Deploying energy-conserving nodes would make your life much easier. Making every node smart, power-hungry, and expensive is not the way to go.\nWe now want to ask which nodes in our network “see” the most other nodes. 
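For a single node, "seeing the most other nodes" is just having the highest degree, which is a one-liner to compute. A tiny sketch (the edge list here is invented for illustration; it is not the WSN from the figure):

```python
# Toy undirected edge list (hypothetical, for illustration only).
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)]

def degrees(edges):
    """Count how many neighbors each node has."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

deg = degrees(edges)
best = max(deg, key=deg.get)  # the single node that sees the most others
```

Picking a whole *set* of nodes that jointly sees the most of the graph is the harder combinatorial question the next sections turn to.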
The diagram below, also taken from [1], shows a selection of monitoring nodes indicated by a red color.\nIn graph theory, a selection of nodes that can “see” (i.e., are connected to) as much as possible of the entire graph is called a Maximum Vertex Cover (MVC) of the graph.\nFinding the MVC is not an easy computational task. It’s a problem that is technically classified as NP-hard. Such problems have solutions that are easy to verify (easy means you can check whether a solution holds relatively quickly, in “polynomial time”) but hard to find (meaning there aren’t any quick - “polynomial time” - ways to find them).\nEnter quantum computation. We can find an approximate solution to the MVC problem using a quantum algorithm!\nApplying the quantum patch Let’s hop back to the original problem we were chewing on: patch management. Our goal was to identify which patches we should apply to our computer network to destroy the most kill chains.\nMathematical graphs are data structures that quantum algorithms tend to like. There are algorithms for splitting graphs apart, ones for searching through them, and so on.\nThe graph we use to represent our predicament is technically known as a bipartite graph. That means it is a collection of nodes of two distinct types (assets and vulnerabilities) which are connected by edges. An edge represents the ability of an attacker to move from one asset to another by exploiting vulnerabilities available on that asset.\nLet’s represent the kill chain we showed above as a bipartite graph. The numbers 1, 4, 8 represent specific vulnerabilities; they don’t mean anything in particular. Just imagine each is a specific vulnerability from a list of known ones.\nWe’d like to eliminate all vulnerability-to-asset edges from this graph, which is like saying we want to apply all patches to all assets and make everything 100% secure. But we can’t. So we need to find a strategy to prioritize vulnerabilities such that the kill chain is broken. 
In this simple example, eliminating vulnerability 4 breaks the kill chain.\nWe can introduce a method to see this. It may seem odd to further modify the graph for this simple example, but it will prove useful later in more complex scenarios. Let’s define the dual graph.\nThe dual graph looks at vulnerabilities and connects them directly if they are connected via an asset in the “original” graph. Here’s the dual graph for our example:\nOur intuition from before, that removing vulnerability 4 will break the chain, is now more palpable: without it, it is not possible to move from 1 to 8.\nWhat if we have a more complex graph? Consider now a more involved example with more assets and more vulnerabilities. It’s still just a toy model compared to a real network with hundreds of thousands of nodes, but it’s already getting visually complicated.\nConverting this to its dual graph produces the following:\nAs a simplification, we built an undirected graph. This can be justified as we want to break the chain regardless of the direction and prioritize the most well-connected vulnerabilities. Also, it makes our lives easier, and it’s often better to start simple and progressively introduce complexity. Things are hard as it is.\nLooking at this graph, it is not apparent which nodes should be removed to break the most kill chains. So what do we do? Find the MVC, of course!\nThe MVC of the dual graph tells us which patches we need to apply to disconnect the most unpatched nodes from each other, thus preventing vulnerability sequences. In other words, breaking kill chains.\nIn this case, the nodes marked in blue show the MVC solution. One should apply these patches to get the best protection for the minimum cost.\nThis solution can be converted back to its form as a bipartite graph, showing a much simpler form than what we had before. Crucially, moving from one vulnerability to another via assets is impossible. 
The chains are broken!\nMVC and quantum: The Classiq way\nWe now have a mathematically formulated way to tackle cybersecurity problems. In the link-monitoring example and for kill chains, the key is finding the MVC. Easy. As we already mentioned, the problem is that finding the MVC is NP-hard. Because a real enterprise network can have 100,000 hosts or more, this is typically an intractable problem.\nFinding an approximate maximum vertex cover can be done on a quantum computer in sub-exponential time using a quantum algorithm that is a workhorse for a class of combinatorial optimization problems tackled by quantum computing: QAOA.\nMVC is a typical example of a combinatorial optimization problem. The difficulty arises from the sheer number of combinations of items; there are typically some constraints that need to be met when selecting a particular combination, and there isn’t sufficient structure in the problem to be smart about how you make a selection. You end up having to check every possibility and find the one that works. In other words, finding a solution will cost an amount of time exponential in the size of the problem.\nQAOA allows you to be faster than that, or sub-exponential, at least if you are willing to find an approximation rather than an exact solution. We will not explain QAOA in depth, though if the crowd wishes (hit us up on the Classiq Slack server) we may do it in the future. You can read many reviews online, e.g. here.\nTackling combinatorial optimization problems with QAOA is one of the strong suits of the Classiq platform. There are three simple steps we need to take here:\nDefine the quantum model using Classiq’s combinatorial optimization capabilities. Use the Classiq engine to generate a parameterized quantum circuit. Execute the circuit within the Classiq platform to obtain the optimal parameters representing the solution. 
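For intuition, the dual-graph construction and the cover it implies can be sanity-checked classically on the tiny 1-4-8 example from earlier. A pure-Python sketch (the asset names are made up, and exhaustive search stands in for QAOA at this toy scale):

```python
from itertools import combinations

# Toy bipartite graph: each asset maps to the vulnerabilities present on it.
# Hypothetical labels; vulnerability IDs echo the 1-4-8 example from the post.
assets = {
    "asset_A": {1, 4},
    "asset_B": {4, 8},
}

def dual_graph(assets):
    """Connect two vulnerabilities if they share an asset in the original graph."""
    edges = set()
    for vulns in assets.values():
        for u, v in combinations(sorted(vulns), 2):
            edges.add((u, v))
    return edges

def best_cover(edges, k):
    """Exhaustive maximum vertex cover: the k nodes covering the most edges."""
    nodes = {n for e in edges for n in e}
    return max(
        (set(c) for c in combinations(sorted(nodes), k)),
        key=lambda cover: sum(1 for u, v in edges if u in cover or v in cover),
    )

edges = dual_graph(assets)      # the dual graph: 1-4 and 4-8
cover = best_cover(edges, k=1)  # patching vulnerability 4 covers both edges
```

The exhaustive search above is exactly the exponential cost the text describes, which is why, at real network sizes, one reaches for an approximate method like QAOA instead.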
The technique of building Classiq models involving QAOA and graphs is quite natural, as it is compatible with standard and open-source tooling. When using the Classiq Python SDK, a network graph is first built with the networkx library. The next step is to define the optimization problem. There are different optimization languages in existence. One general-purpose and expressive way to do this in the Python ecosystem is to use the PyOmo modeling language. PyOmo is powerful and rich, and it allows users to express a large number of optimization models. Uniquely, PyOmo is tightly integrated with the Classiq model.\nThe following image shows a circuit synthesized by the Classiq platform, finding an approximate MVC for a simple dual network. This circuit can be easily run on real quantum hardware or simulators from the Classiq platform at the click of a button.\nConclusion Cybersecurity presents countless non-trivial problems that have the potential to wreak havoc on computer networks and cause costly or dangerous disruptions. 
Some of these challenges can be formulated mathematically and tackled with computational tools.\nThe size of modern computer networks, which we depend upon in our daily lives, can be immense, making solving some of the computational problems in cybersecurity impossible with normal computational resources.\nLuckily, there are tools in quantum computations that make finding solutions possible, but it may be more practical to generate quantum software to search for these solutions.\nThe Classiq platform presents the best and easiest way to apply state-of-the-art quantum algorithms to real-world problems using the standard tools that are used in optimization problems, like the PyOmo language.\n","permalink":"https://blog.winer.co.il/posts/kill-chains/","summary":"\u003cp\u003eThis post was \u003ca href=\"https://www.classiq.io/insights/kill-chains-the-internet-of-things-and-quantum-combinatorial-optimization-a-buzzword-salad\"\u003eoriginally posted\u003c/a\u003e on the Classiq blog. Written with Ariel Smoler.\u003c/p\u003e\n\u003cp\u003eDepending on who you ask, the size of the cyber security market is currently (as of August 2023) estimated at a few hundred billion USD/year. It’s harder to estimate the size of the internet-of-things market as the definitions are more vague than those of cyber-security. Is a web-connected-toaster an Internet-Of-Things  (IOT) device? Sure, maybe, but what about a Radio-Frequency (RF) identification tag with a microchip embedded stuck on an egg carton? Yeah, that’s probably IoT-related too. Coincidentally, both examples can be targets for cyber-attacks. However, it’s unclear which evil cyber-capable red-tailed fox will likely target that lowly yet delicious, internet-connected treat.\u003c/p\u003e","title":"Kill Chains, The Internet Of Things And Quantum Combinatorial Optimization: A Buzzword Salad"},{"content":"This post was originally posted on the Classiq blog.\nHave you ever solved a Sudoku puzzle? 
It was pretty popular at some point in the early 2000s. For some reason, everyone was solving them all the time. Mind you, this is a time well before smartphones. People just didn\u0026rsquo;t have better things to do. If you haven\u0026rsquo;t heard of it, in a Sudoku, the goal is to fill a 9X9 grid with digits such that in each row, each column and each 3X3 sub-grid, each digit appears only once. This type of puzzle is an example of a Constraint Satisfaction Problem (CSP). These are problems where you have to \u0026ldquo;fill in the blanks\u0026rdquo; with an item (e.g., a digit) from a set of possibilities (e.g., the digits 1 through 9) but not break some set of rules (e.g., no repetitions of a digit), and they\u0026rsquo;re more common than you\u0026rsquo;d think.\nThe class of CSP has other, arguably more interesting, instances in the form of games (any crossword puzzle), mathematics (map coloring), and so on. There\u0026rsquo;s a reason why this whole thing is important to us, though, and it isn\u0026rsquo;t because we want to prove a mathematical theorem. It\u0026rsquo;s a lot more practical than that. At least if you\u0026rsquo;re in the business of quantum algorithm design.\nOptimization of Quantum Circuits There are no free lunches, but getting the best bang for your buck is nice. That\u0026rsquo;s the point of optimization techniques: finding how to maximize (or minimize) some problem metric. There are many types of such problems. Sometimes the thing you are trying to optimize takes continuous values, while other times, those values are discrete. Your optimization space can be either finite or infinite, and it can also vary in different ways. You have a set of constraints in CSPs, and you want them all to be met as best as possible. 
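To make the \u0026ldquo;fill in the blanks without breaking the rules\u0026rdquo; idea concrete, here is a minimal classical backtracking sketch for a toy map-coloring CSP (the regions, names and adjacency here are invented for illustration, not taken from any real solver):

```python
def solve_csp(variables, domains, neighbors, assignment=None):
    """Tiny backtracking CSP solver: assign each variable a value from its
    domain such that no two neighboring variables share a value."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return dict(assignment)          # every blank filled, no rule broken
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors.get(var, ())):
            assignment[var] = value
            result = solve_csp(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]          # backtrack: undo and try the next value
    return None                          # no consistent assignment exists

# Hypothetical toy map: three mutually adjacent regions plus one outlier.
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
coloring = solve_csp(variables, domains, neighbors)
```

The same assign-check-backtrack loop underlies classical CSP solving in general, and it is this search that explodes as the number of variables grows.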
This sub-class of CSPs can be called a CSOP, where \u0026ldquo;O\u0026rdquo; stands for \u0026ldquo;Optimization.\u0026rdquo;\nWe want to design a quantum algorithm and make our building blocks \u0026ldquo;high-level.\u0026rdquo; So we want to work with something other than individual quantum gates. Instead, we\u0026rsquo;d like to use a function call that is then converted to the correct gate sequence. This step, during which high-level function calls are replaced by gate sequences, is called synthesis. It\u0026rsquo;s not the only thing that happens here, but for simplicity\u0026rsquo;s sake, let\u0026rsquo;s just consider one aspect. When synthesizing, we are constrained mainly by the quantum hardware. We have a finite number of qubits available, an upper limit to the circuit depth we can execute before noise overwhelms us, and so on.\nThe game we are left with is as follows: Allocate resources (qubit number, circuit depth, etc\u0026hellip;) to functions in a way that optimally meets specified constraints.\nMeeting your Goals, but the Ground is Moving In a simple case, each function call is a \u0026ldquo;mask,\u0026rdquo; which can be replaced by a fixed gate sequence. You place those sequences one after the other and get a circuit. That would be boring and give little wiggle room for synthesizing. In this case, the sequence of function calls uniquely determines the resulting circuit. Luckily, something much more interesting and more difficult is actually happening here.\nThis is a more interesting problem than you might assume because there is more than one correct implementation for some functions. Different implementations come about for various reasons. One example is due to hardware-dependent implementations. These occur because the native gate sets of trapped-ion quantum processors differ from those of neutral-atom quantum processors (or other qubit types). Another is due to the connectivity map of qubits. 
These maps specify on which pairs of qubits conditional operations can occur, and they vary between qubit chip architectures (even for a given qubit type).\nMultiple implementations are not just due to differences in hardware, though. One example is that of the adder circuit. This circuit element allows adding the numbers represented by different registers (sets of qubits). This is a fundamental component of any computer, including classical ones. It is equally valuable for quantum computations. This element has more than one representation as quantum logic gates. The purpose of the adder can be achieved by a ripple-carry circuit or by use of the Quantum Fourier Transform (see this paper). Different implementations have different characteristics in terms of the number of qubits used, circuit depth, number of two-qubit operations performed, and so on.\nReduce, Reuse, Recycle Resources are scarce in computing, whether quantum or otherwise. Unless you are solving really easy problems, you typically run into one of two walls. You either don\u0026rsquo;t have enough speed or enough space. Sometimes you can convert one of these problems into the other, but not always, and it doesn\u0026rsquo;t always help.\nFor this reason, developing methods for making frugal use of resources is an old (and necessary) tradition in computing. Efficient memory management or garbage collection is essential and allows programs to repeatedly reuse the same bits of physical memory. Automating these tasks has been the domain of Electronic Design Automation (EDA) software, which was a source of inspiration for CLASSIQ’s engine.\nWhen synthesizing quantum programs, one particularly tricky piece of the puzzle is what to do with auxiliary qubits. Again, let\u0026rsquo;s focus our attention on what the synthesis engine has to do.\nThe engine selects an implementation of the high-level function and then \u0026ldquo;connects\u0026rdquo; it to the following function implementation. 
If that function has an auxiliary qubit (or qubits), meaning one that was used and is now irrelevant, it can be reused by another function. If it cannot be reused, and the following function also needs an auxiliary qubit, we will have to allocate a new qubit to that function (which we may not have handy!).\nThis reuse of auxiliary qubits makes the Constraint Satisfaction (and optimization) problem we are trying to solve \u0026ldquo;non-local,\u0026rdquo; in some sense. What if selecting one implementation (and auxiliary qubit wiring) at the beginning of the circuit, which seemed to be a good idea at the time, forced me into choosing a less suitable implementation later on? One that is overall worse for optimizing my target? You make a choice, and the end goal moves in one direction or another, making you reevaluate your decisions and change them retrospectively. If only real life gave you such liberties.\nSimple Strategies for Success There are many ways to arrange functions and allocate resources to each one. The number scales exponentially with the number of functions, as quantum things tend to. That means that a brute-force approach won\u0026rsquo;t solve the problem if you have many function calls, unless (maybe) you already have a working quantum computer to speed up searching through the possibilities efficiently. You\u0026rsquo;d still need to store them somehow, so even a quantum computer may not be enough!\nThere are many ways to tackle this difficult problem. The first approach, which is slow but at least methodical, is backtracking. This is an incremental (recursive) way to build the circuit layer by layer. To improve on this, there are better algorithms, some based on clever heuristics. We will say more on such techniques in future blog posts.\nWhat Next? Today we got a small glimpse into the world of CSP and how it applies to the problem of quantum algorithm circuit synthesis. 
Or, as we call it at CLASSIQ: bread \u0026amp; butter.\nWe learned about the problem setup and what could naively be done to start tackling it. But the CLASSIQ engine takes these ideas much further. Beyond applying more intricate and clever ways to make block implementation selections quickly and efficiently, there are extensions of the problem due to the specific challenges brought about by the quantum nature of the circuits we need to build.\nHow would you solve a CSOP when you can perform mid-circuit measurements? How should considerations such as coherent and incoherent noise be integrated into a synthesis engine? How can information processing in a hybrid classical-quantum computation workflow feed, at execution time, into circuit synthesis or re-synthesis? These are all difficult and fascinating questions. In future posts, we will dive deeper into the world of circuit synthesis and the algorithmic challenges it poses.\n","permalink":"https://blog.winer.co.il/posts/the-classiq-engine/","summary":"\u003cp\u003eThis post was \u003ca href=\"https://www.classiq.io/insights/the-classiq-engine-i-can-get-some-satisfaction\"\u003eoriginally posted\u003c/a\u003e on the Classiq blog.\u003c/p\u003e\n\u003cp\u003eHave you ever solved a Sudoku puzzle? It was pretty popular at some point in the early 2000s. For some reason, everyone was solving them all the time. Mind you, this is a time well before smartphones. People just didn\u0026rsquo;t have better things to do. If you haven\u0026rsquo;t heard of it, in a Sudoku, the goal is to fill a 9X9 grid with digits such that in each row and each column, each digit appears only once. This type of puzzle is an example of a Constraint Satisfaction Problem (CSP). 
These are problems where you have to \u0026ldquo;fill in the blanks\u0026rdquo; with an item (e.g., a digit) from a set of possibilities (e.g., the digits 1 through 9) but not break some set of rules (e.g., no repetitions of a digit), and they\u0026rsquo;re more common than you\u0026rsquo;d think.\u003c/p\u003e","title":"The CLASSIQ Engine: I CAN get some satisfaction"},{"content":"So many qubits, so little time At the time of writing this there are at least five competing technologies vying for the quantum computing throne. Superconducting qubits, trapped ions, neutral atom arrays and silicon (CMOS) qubits are the top contenders. The various so-called \u0026ldquo;color centers\u0026rdquo;, a prominent example of which are Nitrogen-Vacancy centers in diamonds, are arguably lagging behind.\nUsing an eye-rollingly terrible expression, it\u0026rsquo;s \u0026lsquo;The Zoo of Qubits\u0026rsquo; 🙄. The only reason I am willing to use it is because I do believe some of them are cute but useless, some are scary and may bite you, and it\u0026rsquo;s very likely most of them will be extinct soon.\nHere I will give a brief introduction to what quantum dot qubits are. Although not currently as advanced as trapped ions and superconducting qubits, the supposed incumbents and leading contenders for the qubit crown, they may well overtake them, as they stand on some of the most solid of foundations.\nSolid state physics To start speaking about quantum dots, we must choose our own adventure. We can go left, deciding we don\u0026rsquo;t care (or we already know) about how to control the motion and orientation of electrons in semiconductors. That\u0026rsquo;s a much shorter path, but you will miss out on some intricate, albeit fun, physics. On the other hand we may decide to dive into the entirety of solid state physics, starting at page 1 of Ashcroft \u0026amp; Mermin and spiralling into a 5-piece series that will be read by no-one and will benefit even fewer people. 
Here, I\u0026rsquo;ll try to guide us through the long route, but doing it like an elite Tour de France cyclist on a magnificent alpine road (yeah, I\u0026rsquo;ve been watching the Netflix show about the Tour this weekend). Meaning: we\u0026rsquo;ll do it as quickly as possible and not pause to take in the views. Just the bare minimum, as I perceive it. It won\u0026rsquo;t be a five-piece series; I will do my best to complete it in just two.\nThe flow of electrons in a periodic forest To get us going towards understanding how electrons can be qubits, we need to figure out some things about electrons in electronic devices. We all know how electricity works. You get a metallic wire, a battery and a small light bulb and connect them in a circuit. The lightbulb turns on because electricity flows through it. How many electrons flow through the lightbulb in, say, a second? We can figure that out. A typical LED is happy with about 10mA of current through it. That\u0026rsquo;s 0.01 Coulomb/second or (rounding down) \\(10^{16}\\) electrons per second. One followed by 16 zeros of electrons every second.\nIn quantum information applications (i.e. qubits) we care about very refined control where we address a single electron. We aren\u0026rsquo;t spray-painting here, we\u0026rsquo;re using the finest of tiny detail brushes. To wit, in the following sections we\u0026rsquo;ll paint a more refined picture of how electrons move.\nThe classical picture In quantum mechanics we think of particles as waves sometimes. A wave, unlike a traditional billiard-ball particle, doesn\u0026rsquo;t have a position. Is the wave here? Is it somewhere? It\u0026rsquo;s an extended entity. It has different wave amplitudes at different positions and if you look at more than one wave arriving at the same position, they act as the sum of their individual amplitudes. 
Quantum mechanics tells us the probability of finding the particle (or particles) at a specific location is given by that amplitude squared. All this is fairly straightforward.\nOur goal here is to get some intuition for how electricity flows through a medium. There\u0026rsquo;s a simple but totally useful model for this called the Drude model, after the 19th-century German physicist Paul Drude. Here\u0026rsquo;s the model: electricity is the flow of electrons, which we imagine to be billiard balls hopping about in a forest of immovable pillars.\nThe electrons are the blue circles, and they are moving between large red circles which are immovable. They are mobile and, of course, electrically charged. This means that if we apply a uniform electrical field (for example by connecting a battery across the opposite sides of the white slab) then they will start moving. In the image above, this is marked as their \u0026ldquo;drift velocity\u0026rdquo; \\(v_d\\) which points to the right. Because the electrons have negative charge you can see the electric field direction \\(E\\) and the electrical current \\(I\\) are actually pointing against the direction of the drift velocity. This is what we call \u0026ldquo;conventional current\u0026rdquo;, which was chosen such that the direction of the current is out of the positive battery terminal and into the negative battery terminal. In retrospect, it\u0026rsquo;s backwards, but we\u0026rsquo;re stuck with it.\nWhat we get is moving particles that have some probability of colliding with an immovable ion and elastically bouncing off of it. When they do, their velocity changes, but as they are continuously acted on by the force due to the electrical field they eventually keep on trucking towards the positive battery terminal. 
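As a toy illustration of this picture, here is a small one-dimensional Monte Carlo sketch (all names and parameter values are mine, chosen arbitrarily for illustration): each step the field adds a little velocity, and with some probability the electron elastically bounces off an ion, keeping its speed but randomizing its direction. Averaging over many steps gives a finite drift velocity.

```python
import random

def drude_drift(steps=20000, dt=1.0, accel=0.01, p_scatter=0.1, seed=1):
    """Toy 1-D Drude model. Each step the field adds accel*dt to the
    velocity; with probability p_scatter the electron elastically bounces
    off an ion (speed kept, sign randomized). Returns the velocity
    averaged over all steps, i.e. the drift velocity."""
    rng = random.Random(seed)
    v, total = 0.0, 0.0
    for _ in range(steps):
        v += accel * dt                       # push from the applied field
        if rng.random() < p_scatter:          # collision with an immovable ion
            v = rng.choice((-1, 1)) * abs(v)  # elastic bounce: random direction
        total += v
    return total / steps

v_drift = drude_drift()  # positive: net motion along the applied force
```

Despite all the bouncing, the average comes out positive and of order accel divided by the scattering rate, which is the Drude drift velocity.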
Because this process has a stochastic (random) nature to it, it becomes most convenient to start asking questions about things like the average momentum of the electrons.\nThis is a very simple idea, but it already contains a fair bit. For example, we can very easily extract Ohm\u0026rsquo;s law from here. See the wiki article if you\u0026rsquo;re interested. A lot more can be done with this simple starting point. That\u0026rsquo;s always the hallmark of a nice scientific theory. However, we can experimentally show Drude alone cannot be the complete description of how electrons behave.\nWhy do we need quantum electrons? How do we know the Drude model can\u0026rsquo;t be the whole story? Well, one good clue is that when you cool some metals down their resistance to the flow of electricity becomes exactly zero. Not close to zero, not \u0026ldquo;very small\u0026rdquo;, it becomes non-existent. How can that be, if we are thinking of Drude\u0026rsquo;s immovable ions? Well, it can\u0026rsquo;t be. Unless something very fishy is happening, for example if all electrons magically align in a neat line which just happens to miss every single ion. Unless they conspire in this way, there will be some chance of hitting an ion. And yet, electrical resistance can fully disappear.\nWe\u0026rsquo;ve known about this behavior for a fairly long time. The ability to cool things to cryogenic temperatures (anything below about -150 °C is technically cryogenic) dates back to the 19th century. Oxygen, which becomes liquid at -183 °C, was first liquefied in 1877. Liquefaction of Helium, which occurs at -269 °C, was first achieved in 1908 by the Dutch physicist Heike Kamerlingh Onnes. He then used this ability to measure the electrical resistivity of metals at cold temperatures and discovered that the resistivity of mercury (the element Hg) disappears when placed in a bath of liquid helium.\nThere are other bits of experimental evidence that tell you the Drude model is incomplete. 
With modern eyes it shouldn\u0026rsquo;t be surprising, for example, that to give a more complete description of electronic motion we would like to include the wave nature of the electron. The fact electrons have spin as well as electrical charge should be accounted for, and we may want to bring more detail into the model to account for things like vibrations of the ions (though that isn\u0026rsquo;t a correction due to quantum theory and can be just slapped onto the Drude model with relative ease).\nWhat is a Solid? To move forward we need to pause for a moment and think about something quite basic. What\u0026rsquo;s a solid? What makes the \u0026ldquo;solid\u0026rdquo; in \u0026ldquo;solid state physics\u0026rdquo;? You can think of one of its high school definitions. A solid is the state of matter of an object which \u0026ldquo;opposes shear forces\u0026rdquo;. That means that if you glue something to the ground, push it from the side, and that thing pushes back, then it is a solid. Water can\u0026rsquo;t do that. Neither can Jell-O. That\u0026rsquo;s a fair description, but it doesn\u0026rsquo;t tell us anything about what a solid is microscopically. What does a solid look like at the atomic scale?\nThe way we learn to think about solids as physicists in our undergraduate training is as Crystals 💎. A crystal in this context means a periodic tiling of atoms. Crystallography is really rich and interesting and there\u0026rsquo;s tons to be said about it from physics-theoretical, mathematical and experimental points of view. It is a large body of work that has been explored for over a century but is ever evolving.\nUntil the 1980s it was considered a fundamental truth that crystals must be absolutely periodic and obey certain rotational symmetry properties. The idea was that a solid is an object where atoms (maybe of more than one variety) are placed on a grid in space. 
That grid can look like a chessboard, it can look like a honeycomb, or it can have some other repeating pattern in two or three dimensions.\nThis is a collection (source) of possible periodic tilings of atoms, shown as a single \u0026ldquo;unit-cell\u0026rdquo; which is replicated repeatedly to build the solid (crystal).\nThe rotational symmetry rule means you can take a unit cell, perform a rotation at some angle about some axis and get the exact same cell. It was thought to be a necessary condition for any valid solid structure, but it was then shown by experimentalist Dan Shechtman of the Technion in Israel that this was not the case. This was a discovery for which he was awarded the Nobel Prize in Chemistry in 2011. I told you this story because it\u0026rsquo;s cool, but mostly because I wanted to get across the point that crystallography is rich and interesting.\nElectron waves in solids We\u0026rsquo;ve established (I hope) the picture of electrons in a solid is not that of billiard balls and immovable posts. What I do want us to think about next is the model where electrons are waves flowing through a periodic set of barriers. When electrons, described as waves, flow through periodic obstructions, something magical happens.\nConsider a wave-breaker offshore at your favorite beach. Those long piles of big rocks placed in the water to protect the beach from most of the waves. The image below was taken at my beloved local beach in Herzeliya.\nLook at those waves going through the gap between adjacent breakers. There\u0026rsquo;s a set of circular wave-fronts expanding towards the beach. What is less apparent in the image is that there are also reflections. Waves came from the sea behind the wave breaker. A part of that energy went through but a part was reflected back. 
The waves coming in, the waves coming through and the waves reflecting back all add together to create the pattern we see on the surface of the water.\nIn typical physicist fashion, let\u0026rsquo;s imagine something entirely improbable. Imagine not one, but many wave-breakers. And imagine they are placed in a periodic array. In such a contrived case, interesting patterns start to emerge. The shape of the disturbance starts to become periodic itself. This is called the Bloch Theorem. Electron waves propagating in a solid obey this theorem and are referred to as Bloch electrons. The details are intricate and crucial to making calculations, but we only wanted a whirlwind tour. So what\u0026rsquo;s the take-home message here?\nThe key is that because of the dance of electron waves and ions in the solid, the wave function of the electrons is periodic. A periodic wave function behaves, qualitatively, like guitar strings and other systems that exhibit vibrations. Harmonic oscillators, drumheads and so on. Vibrations are very useful in physics because in such systems you start seeing things like discrete energy levels and other things I will expand on later. It\u0026rsquo;s this vibrational quality, then, that is going to be key for building qubits, but for now we will leave it at that.\nManufactured solids Today, because we are able to control single atoms, and single atomic layers, we have access to artificial (manufactured) crystal structures. An example of this is the one-atomic-layer-thick sheet of carbon atoms called Graphene. This material and its properties have been actively explored since the 1990s, and also produced Nobel Prize laureates like Andre Geim and Konstantin Novoselov. More recently, the ability to stack such monolayers and even to change the angle between them as they are stacked (so-called \u0026ldquo;magic angle graphene\u0026rdquo;) has produced even richer physics (see for example pioneering work by Pablo Jarillo-Herrero at MIT). 
This is exceptionally exciting stuff and I think I\u0026rsquo;ll try and write a post about this topic at some point.\nWith manufactured solids we can control our periodic Bloch electrons in more ways. For example, we can make layers of different materials, like a layered cake. This makes the periodicity that the electrons \u0026ldquo;see\u0026rdquo; above and below them different to what they see to their sides. Such a situation can make it \u0026ldquo;easier\u0026rdquo; for the electrons to flow within a layer than between layers. They end up flowing only in two dimensions and not in the third dimension. This is a useful situation called a Two-Dimensional Electron Gas, abbreviated to 2DEG. There are other possibilities for controlling electrons that come about by being able to control the geometry and materials in the manufactured solid.\nSummary Our playing field is that of the solid. A periodic set of atoms which make up what is technically known as an atomic lattice. Electrons, which behave as waves and aren\u0026rsquo;t single point particles, can flow through the lattice and interact (interfere). The nature of the interference pattern that emerges is intimately linked to the lattice through which they flow. It becomes periodic itself, to match the period of the lattice. This picture was developed to enable calculations and engineering solutions to control the flow of electrons.\nIn the next post I\u0026rsquo;ll explain the experimental knobs we have for controlling electrons, and how those knobs enable the creation of devices where Quantum Information is manipulated. 
I will also try to explain why such devices hold great promise for achieving the scale needed for truly useful quantum computation.\n","permalink":"https://blog.winer.co.il/posts/silicon-qubits/","summary":"\u003ch2 id=\"so-many-qubits-so-little-time\"\u003eSo many qubits, so little time\u003c/h2\u003e\n\u003cp\u003eAt the time of writing this there are at least five competing technologies vying for the quantum computing throne. Superconducting qubits, trapped ions, neutral atom arrays and silicon (CMOS) qubits are the top contenders. The various so-called \u0026ldquo;color centers\u0026rdquo;, a prominent example of which are Nitrogen-Vacancy centers in diamonds, arguably lagging behind.\u003c/p\u003e\n\u003cp\u003eUsing an eye-rollingly terrible expression, it\u0026rsquo;s \u0026lsquo;The Zoo of Qubits\u0026rsquo; 🙄. They only reason I am willing to use it is because I do believe some of them are cute but useless, some are scary and may bite you, and It\u0026rsquo;s very likely most of them will be extinct soon.\u003c/p\u003e","title":"The basics of quantum-dot qubits (1): some basic solid-state physics"},{"content":"Apprenticeship The education system in the UK has an apprenticeship path built into it. This path allows young people who wish to finish the purely academic chapter of their studies at 16 to acquire vocational skills. I\u0026rsquo;ve never done an apprenticeship myself, but I was enamored by the idea when I heard about it as a schoolboy.\nHow I imagine an apprenticeship, which may be very different from what a British apprenticeship actually looks like, is like a movie montage. The young padawan becomes an expert via a progression of intricately designed and precisely exacting exercises. It\u0026rsquo;s the fantasy of having an all-knowing responsible adult thoughtfully guiding you. 
A very comforting dream which for me was also fuelled by books such as Shop Class as Soulcraft (which I read slightly later in life) and Zen and the Art of Motorcycle Maintenance which I really loved as a high-schooler.\nAs I learned through subsequent life experience, there is seldom a responsible adult. You need to fumble about (hence the name of this blog) and construct a coherent set of experiences that adds up. Nevertheless, \u0026ldquo;Zen \u0026amp; the Art\u0026rdquo; gives at least one piece of practical advice on how one should approach the journey:\nMountains should be climbed with as little effort as possible and without desire. The reality of your own nature should determine the speed. If you become restless, speed up. If you become winded, slow down. You climb the mountain in an equilibrium between restlessness and exhaustion. Then, when you\u0026rsquo;re no longer thinking ahead, each footstep isn\u0026rsquo;t just a means to an end but a unique event in itself. This leaf has jagged edges. This rock looks loose. From this place the snow is less visible, even though closer. These are things you should notice anyway. To live only for some future goal is shallow. It\u0026rsquo;s the sides of the mountain which sustain life, not the top. Here\u0026rsquo;s where things grow.\nBut of course, without the top you can\u0026rsquo;t have any sides. It\u0026rsquo;s the top that defines the sides. So on we go—we have a long way—no hurry—just one step after the next—with a little Chautauqua for entertainment. Mental reflection is so much more interesting than TV it\u0026rsquo;s a shame more people don\u0026rsquo;t switch over to it. They probably think what they hear is unimportant but it never is.\nA good project and good guides I\u0026rsquo;ve had a long-standing project with multiple false starts over the years. 
Actually, I have a proverbial drawer full of them, but in this one, the goal is to simulate the motion of many interacting particles on a computer screen. It\u0026rsquo;s natural for any physics student to consider doing this, as it has been a reference setup (at least in thought) since the early days of the discipline: the ideal gas, properties of solid matter, and the structure of galaxies.\nWhile reading about simulating the gravitational interaction of many bodies, one algorithm caught my eye and imagination. It\u0026rsquo;s not necessarily because it is particularly elegant or ingenious (though it ticks both boxes). Its output looks pretty, even if you don\u0026rsquo;t understand the details.\nThat, I think, is an essential factor when selecting a project. It should be sufficiently simple (and this one seemed to be simple because it was one of the first algorithms in the book), and it should be exciting (to you). This particular one had the added feature of being very visual, which is my preferred mode of thought.\nAnd so, with the omniscient adult in absentia, is a curious protagonist doomed to hack about like a drunkard? That may well be the case. But if you can find someone, preferably someone slightly more experienced, to walk with you, that\u0026rsquo;s significantly better. Some people enjoy walking alone. I like company.\nFor this expedition, I recruited not one, but two, much more experienced guides: Joe Lee Moyet and Matan Cohen (@matanco64). Each of these gentlemen has orders of magnitude more programming experience than I will ever have. I could say more kind words about Joe and Matan, but I\u0026rsquo;ll avoid this one tangent for now. Anyway, thanks, Joe and Matan! And for the rest of us, I recommend finding a good guide to help you up the mountain with as little effort as possible.\nThere are several take-home messages for me here, which made this project reach completion this time. It wasn\u0026rsquo;t too hard. I was excited about the topic. 
It fit well with what I feel is fun to do. I piggybacked another skill I wanted to learn (the Rust programming language) on top of something I was already excited about to get something of a two-for-one, and most importantly I had guides I wasn\u0026rsquo;t afraid to look stupid next to and who were at least as excited about the project as I was.\nThe computational complexity of many particles moving under gravity Our goal today then is to draw circles on a computer screen that appear to be gravitationally acting upon each other. For that, let\u0026rsquo;s quickly recall some principles of mechanics and how one can find the equations of motion of interacting objects.\nTo compute the gravitational attraction of two particles you apply Newton\u0026rsquo;s law of gravitation.\n$$ m_1\\ddot{\\vec{r}}_1 = \\frac{G m_1 m_2}{|\\vec{r}_1-\\vec{r}_2|^3}(\\vec{r}_2-\\vec{r}_1)$$$$ m_2\\ddot{\\vec{r}}_2 = \\frac{G m_1 m_2}{|\\vec{r}_2-\\vec{r}_1|^3}(\\vec{r}_1-\\vec{r}_2)$$This is a nice set of equations as you can see both Newton\u0026rsquo;s second law and his third (action and reaction) participating.\nIf we only have two objects interacting, there is a closed-form solution. Observationally, the solution was formulated by Kepler in his laws of planetary motion. Newton was then able to find equations describing the motion from the basic principles he had laid down.\nIf you have three bodies, famously, there\u0026rsquo;s already no closed-form solution. It\u0026rsquo;s the entire premise of a series of sci-fi novels, the first of which I highly recommend.\nThere are certainly no such solutions if you have \\(n\\) interacting bodies. This is because the system becomes chaotic, but I will not delve into the details of why that is the case here. You can read about it if you are interested.\nTo tackle the motion of many particles we must then do so numerically. The thing about gravitational attraction is that the range of the force is infinite. 
We see this from the equations above: the force only disappears when the distance \(|\vec{r}_1-\vec{r}_2|\rightarrow \infty\).\nSo if you want to calculate the force acting on each particle in a collection, you really have to take into account each individual particle. How many computations is that? About \(n^2\). You pick one particle out of \(n\) and then need to pick each of the other \(n-1\) particles left to make the force calculations. If you have \(10^4\) particles you need about \(10^8\) calculations. Mind you, that\u0026rsquo;s the number of calculations you need to make each time you update the position of your \(n\) bodies. And, indeed, you may want more objects on screen. Quadratic computational overhead is a dear price to pay.\nThe Quad Tree - a data structure for efficiently partitioning spatial detail The name of the game is to find a strategy for turning a computation with quadratic complexity into one that is cheaper. Maybe logarithmic, more realistically \(O(n\log n)\).\nThe first step is to efficiently store data about our particles. This can be done by taking advantage of the non-uniformity of particle distribution in our simulation space. There is often at least some structure. Namely, regions where they are more, or less, tightly packed together.\nA Quad-Tree is a two-dimensional extension of a binary tree. Instead of each node having Left or Right children, it has exactly four children representing the Northwest, Northeast, Southwest and Southeast equally subdivided quadrants of a square (the original node).\nDivide and conquer When building a quad tree, we decide on some particle capacity each quadrant is allowed. If the capacity is exceeded (as we add another particle), then a subdivision of the quadrant occurs, and we distribute the particles amongst the new quadrants.
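This capacity-triggered subdivision can be sketched in a few lines of Rust. This is a toy sketch of my own for illustration; none of the names come from a real library, and guards against pathological cases (such as many coincident particles) are omitted:

```rust
// Minimal quad-tree sketch: each node holds up to CAPACITY particles; one
// particle beyond that splits the square into four equal child quadrants
// (NW, NE, SW, SE) and redistributes the stored particles among them.

const CAPACITY: usize = 4;

#[derive(Clone, Copy)]
struct Particle {
    x: f64,
    y: f64,
}

struct QuadTree {
    // Center and half-width of this node's square region.
    cx: f64,
    cy: f64,
    half: f64,
    particles: Vec<Particle>,
    children: Option<Box<[QuadTree; 4]>>, // NW, NE, SW, SE
}

impl QuadTree {
    fn new(cx: f64, cy: f64, half: f64) -> Self {
        QuadTree { cx, cy, half, particles: Vec::new(), children: None }
    }

    fn insert(&mut self, p: Particle) {
        // Already subdivided: route the particle into the right quadrant.
        if let Some(children) = self.children.as_mut() {
            children[Self::quadrant(self.cx, self.cy, &p)].insert(p);
            return;
        }
        self.particles.push(p);
        if self.particles.len() > CAPACITY {
            self.subdivide();
        }
    }

    fn subdivide(&mut self) {
        let h = self.half / 2.0;
        let (cx, cy) = (self.cx, self.cy);
        self.children = Some(Box::new([
            QuadTree::new(cx - h, cy + h, h), // NW
            QuadTree::new(cx + h, cy + h, h), // NE
            QuadTree::new(cx - h, cy - h, h), // SW
            QuadTree::new(cx + h, cy - h, h), // SE
        ]));
        // Redistribute the particles we were holding into the new children.
        let old = std::mem::take(&mut self.particles);
        for p in old {
            self.insert(p);
        }
    }

    fn quadrant(cx: f64, cy: f64, p: &Particle) -> usize {
        match (p.x < cx, p.y >= cy) {
            (true, true) => 0,   // NW
            (false, true) => 1,  // NE
            (true, false) => 2,  // SW
            (false, false) => 3, // SE
        }
    }

    // Total number of particles stored in this subtree.
    fn len(&self) -> usize {
        self.particles.len()
            + self.children.as_ref().map_or(0, |c| c.iter().map(|q| q.len()).sum::<usize>())
    }
}

fn main() {
    let mut tree = QuadTree::new(0.0, 0.0, 1.0);
    for i in 0..5 {
        let t = i as f64 / 10.0;
        tree.insert(Particle { x: t, y: -t });
    }
    // A fifth particle exceeds the capacity of four, so the root has split.
    assert!(tree.children.is_some());
    assert_eq!(tree.len(), 5);
    println!("root subdivided, {} particles stored", tree.len());
}
```

The `Option<Box<…>>` for children keeps leaf nodes cheap; a node either stores particles directly or delegates to its four quadrants, never both.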
An example is shown in the image below, where we set the capacity (arbitrarily) to four particles; once a fifth particle arrives, we must subdivide.\nAs the number of particles increases the number of repeated subdivisions must also increase, as in the following image taken from the Wikipedia article on this topic.\nThe utility of the quad tree is two-fold. It has more detail (more nodes) where more particles are present, and it enables finding particles stored inside it in an efficient way (because it\u0026rsquo;s a tree structure). For these reasons it\u0026rsquo;s extensively used in computer graphics whether gravitationally attracting particles are involved or not (e.g. in the old computer game Worms).\nIntegration of the equations of motion Writing down the force equations on every particle is straightforward, if computationally taxing. But even if we have infinite computational power, we need some rule for updating the velocities and positions of those particles. An update strategy for positions based on a set of differential equations, whether they are the equations of Newtonian mechanics or any other differential equations, is a (numeric, discrete) integration scheme.\nThe most basic way to do this is the Euler method: update the velocity vector of each particle based on its acceleration, and then update the position vector based on its velocity.\n$$ \dot{\vec{r}}_i[t]=\dot{\vec{r}}_i[t-1]+\ddot{\vec{r}}_i[t-1]dt $$$$ \vec{r}_i[t]=\vec{r}_i[t-1]+\dot{\vec{r}}_i[t-1]dt $$How bad is this technique? We can figure it out by comparing to a Taylor series expansion of the position vector to second order:\n$$ \vec{r}_T = \vec{r}[t-1]+\dot{\vec{r}}[t-1]dt+\frac{1}{2}\ddot{\vec{r}}[t-1]dt^2+O(dt^3) $$The difference between this expansion and the Euler integration equations is the term \(\frac{1}{2}\ddot{\vec{r}}[t-1]dt^2\). So we see that at each time step the error accumulates proportionally to \(dt^2\).
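In code, the Euler update is only a couple of lines. Here is a minimal Rust sketch (the `Body` struct and its field names are made up for illustration, not taken from the post's actual implementation):

```rust
// One Euler step for a single body: velocity is updated from the current
// acceleration, and position is updated from the velocity of the previous
// time step, exactly as in the update equations above.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Body {
    pos: (f64, f64),
    vel: (f64, f64),
}

fn euler_step(b: &mut Body, acc: (f64, f64), dt: f64) {
    // Remember the old velocity; the position update must use it.
    let (vx, vy) = b.vel;
    b.vel.0 += acc.0 * dt;
    b.vel.1 += acc.1 * dt;
    b.pos.0 += vx * dt;
    b.pos.1 += vy * dt;
}

fn main() {
    // Constant acceleration (0, -10), one step of dt = 0.5:
    // velocity becomes (1, -5) and the position moves by (0.5, 0).
    let mut b = Body { pos: (0.0, 0.0), vel: (1.0, 0.0) };
    euler_step(&mut b, (0.0, -10.0), 0.5);
    assert_eq!(b.vel, (1.0, -5.0));
    assert_eq!(b.pos, (0.5, 0.0));
    println!("{:?}", b);
}
```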
After \\(n\\) steps, each of size \\(\\propto{1/dt}\\) the error is proportional to \\(dt\\). This isn\u0026rsquo;t awesome performance.\nA different scheme, which makes the error accumulation rate quadratic in \\(dt\\) is the velocity-Verlet algorithm, but there are others. The go-to algorithm used in such cases is the family of Runge Kutta (RK) methods. Specifically RK-4 is the default integrator in many computational numerics tools, such as Matlab.\nThe Barnes-Hut algorithm Finally, we get to the Barnes-Hut (BH) algorithm. The data structure we want to use is a Quad-Tree and we set the maximum capacity of a node to be exactly one. So nodes can either contain no particles, or exactly one particle. Attempting to add a second particle to a node triggers a subdivision, turning it into four separate nodes. After this step the particles will be redistributed amongst the newly created nodes.\nBut storing particles in an efficient data structure is insufficient if we want to speed up gravitational dynamics calculations. The crux of the BH algorithm, is an approximation. If a group of particles is far from a test particle, they can all be treated as a single particle (whose mass is the combined mass of the entire group) that is located at the group center of mass. This turns multiple force calculations to a single calculation. What is left is to define what \u0026ldquo;far-away\u0026rdquo; means in a quantitative way.\nThe recipe is as follows: first store all particles in a quad tree. Then select the first particle in the list and find its distance \\(d\\) from the center of mass of the root node, the node containing all particles in the simulation universe. The dimension of the side of this node is \\(s\\). Now calculate\n$$ \\frac{s}{d} \u003c? \\theta $$where \\(\\theta\\) is a constant between 0 and 1. If the left-hand-side of the equation is less than \\(\\theta\\) treat all particles as acting from the center of mass and perform just a single calculation. 
If not, go to the next node in the tree and repeat the calculation. The nice thing here is that if \(\theta=0\) the process converges back to the naive case: everything is, in some sense, close by.\nThis approximation glosses over the fact that there can be structure inside a node. For example, all particles can be on one side of the node, or the other. A physicist may say it\u0026rsquo;s akin to the first term in a multipole expansion, but I won\u0026rsquo;t go into more detail for now.\nWhy Rust? That was it for describing this nice algorithm that takes a quadratic cost and makes it much cheaper. Now all that is left to say is why I wanted to implement this thing in Rust.\nWhen I first heard about Python it was sometime around the winter of 2008. I had a student job testing software, and all the seasoned programmers there were quite happy about doing a first or second big project in the language. I was a physics undergraduate and had barely done a basic programming course, taught in C. It was a bit over my head, and experience level, at the time to see what the fuss was about. It was easy to play with Python, I could see that immediately. It wasn\u0026rsquo;t clear how this language was fundamentally different from other things at the time, but I was excited about putting time and effort into learning it because I guessed it was going to be worth my while. Reading about the Rust language these days I feel very much the same.\nThe Rust programming language is a low-level language coming from Mozilla Research. It\u0026rsquo;s associated with all manner of superlatives, such as \u0026ldquo;memory-safe\u0026rdquo; and \u0026ldquo;blazingly fast\u0026rdquo;, but I\u0026rsquo;ll save writing about its differentiating factors for another day. Regardless of any technical wizardry which makes it worthwhile to write in, when reading about Rust I get the same feeling I did about Python years ago.
I\u0026rsquo;m not saying the languages are similar or that they have similar use-cases. Just that I get that same feeling about learning it.\nIndeed, beyond the supposed prestige of knowing Rust in the coming years, it also ended up being technically worth it. The number of particles I was able to simulate while maintaining a usable frame rate goes well into the tens of thousands. Moreover, because Rust has tooling for compilation into WebAssembly (WASM), it can generate binaries that run in the browser. This alone is, to me, reason to play with it, and as evidence I include below the working application I teased for this entire post. Hope you enjoy!\nWASM - Web Assembly and a live example live demo\n","permalink":"https://blog.winer.co.il/posts/barnes-hut-in-rust/","summary":"\u003ch1 id=\"apprenticeship\"\u003eApprenticeship\u003c/h1\u003e\n\u003cp\u003eThe education system in the UK has an apprenticeship path built into it. This path allows young people who wish to finish the purely academic chapter of their studies at 16 to acquire vocational skills. I\u0026rsquo;ve never done an apprenticeship myself, but I was enamored by the idea when I heard about it as a schoolboy.\u003c/p\u003e\n\u003cp\u003eHow I imagine an apprenticeship, which may be very different from what a British apprenticeship actually looks like, is like a movie montage. The young padawan becomes an expert via a progression of intricately designed and precisely exacting exercises. It\u0026rsquo;s the fantasy of having an all-knowing responsible adult thoughtfully guiding you.
A very comforting dream which for me was also fuelled by books such as\n\u003ca href=\"https://www.amazon.com/Shop-Class-Soulcraft-Inquiry-Value-ebook/dp/B00273BHPU\"\u003eShop craft as soul craft\u003c/a\u003e (which I read slightly later in life) and\n\u003ca href=\"https://www.amazon.com/Zen-Art-Motorcycle-Maintenance-Inquiry/dp/0060589469\"\u003eZen and the art of motorcycle maintenance\u003c/a\u003e which I really loved as a high school-er.\u003c/p\u003e","title":"Barnes Hut in Rust"},{"content":"This post was originally posted on qubit.il, the Israeli quantum community.\nIt is the hype of the moment. It is the gold rush of the 2020s. Flattering articles about groundbreaking companies, large funding rounds raised by small teams coming together in coworking spaces across the country and the world, and even a few IPOs. The technology is quantum information. This may be the biggest technological revolution since the previous information revolution, the one that gave us personal computers, mobile phones, and the internet. In short: there is definitely a story here that could be a big deal.\nThe stories about how quantum computing works can be read in countless popular science articles, in whatever language you find it comfortable to think in: Hebrew, English, Arabic, or Python. Name a language, and someone has surely written a general explanation of the principles that allow the qubit, the quantum version of the classical computer\u0026rsquo;s bit, to perform computations quickly. Often these explanations boil down to \u0026ldquo;the quantum computer checks all the possibilities simultaneously and arrives at the answer\u0026rdquo;. That is not the correct explanation, and honestly it is a load of nonsense. But there are legitimate explanations of the power of computation with quantum systems, and the mathematical-physical principles behind it have been laid out by many scientists, among them quite a few Israelis, for example the computer scientist Prof. Dorit Aharonov of the Hebrew University.\nSo we will not give yet another semi-intuitive explanation of the computational power of quantum systems here. As noted, you can read that elsewhere.
Instead, if we assume that we all accept, without a healthy measure of cynicism, the coming of powerful quantum computers, and that is quite a strong assumption, we will want to ask: \u0026ldquo;what are the interesting engineering tasks a talented (software or hardware) engineer can take on when setting out to build a new quantum computer?\u0026rdquo;\nWhat engineers can look for in the quantum world Based on a superficial survey, drawn from my memory of the names of courses friends took at university, I assert that even exact-science graduates who did not study physics usually took a course, or half a course, in quantum mechanics. Sometimes it is called Physics 3, sometimes it is a chapter inside another course, but the principles get taught one way or another. As is the way of knowledge picked up in passing in order to pass an exam in some long-forgotten semester, it is probably not the thing that stays sharp in memory two or three years later.\nIf so, how will the algorithms expert, the one who can solve problems from Cormen\u0026rsquo;s book even if you wake her at three in the morning after a night of heavy drinking, fit into the effort to build a quantum computer? What about the electrical engineer who named his son \u0026ldquo;Signal\u0026rdquo;, his daughter \u0026ldquo;Random\u0026rdquo; and the cat \u0026ldquo;Noise\u0026rdquo;, and is currently designing the next generation of high-speed communication chips, how is he relevant? And perhaps more importantly: why should this interest them at all?\nThe short answer is that they will absolutely fit in. Few fields currently offer a greater wealth of new and exciting problems, in the traditional engineering disciplines, that can be worked out from the ground up. Imagine being able to join von Neumann\u0026rsquo;s team building the first programmable electronic computer, back in the 1940s; it is almost like that.\nQuantum for software poets Today\u0026rsquo;s quantum software field is fascinating because it lays the foundations for everything that will happen afterwards. The story of software is, after all, a story of toolchains and abstractions. Nobody wants (or is able) to write machine language on a classical computer, and nobody wants (or is able) to do anything meaningful on a quantum computer by programming it at the level of its basic logic gates. Yes, a quantum computer also has logic gates, just like a classical one. A classical computer can be built entirely out of NAND gates; in the quantum case there is a different set of gates, but the problem of writing software in a high-level language that is somehow converted into a hardware-level description that can run \u0026ldquo;on the metal\u0026rdquo; is the same problem.
In other words: compilers need to be built.\nUnlike the classical world, where the technology is already fairly settled and the compilation targets are Intel processors, ARM, and perhaps microcontrollers if you insist on being an embedded hipster, the quantum world still has a zoo of technologies. There are several competing qubit technologies that are radically incompatible with one another. Not only do they not operate at the same rates, they implement different basic gates, their error rates differ, and the way errors are handled differs too. Will the toolchain be completely different between technologies? Will it be possible to get from a high-level language to an intermediate representation like LLVM (one proposal for such a quantum intermediate representation is called QIR; feel free to Google it)? Already today we see a growing number of quantum programming languages with different goals (Q#, Qiskit, QUA, OPEN QASM, and others), so besides compilers there are also questions about the design of the programming languages themselves.\nBecause of this heterogeneity, an abstraction layer will have to be designed over the machines so that they present their capabilities to the layers above them in a uniform way. It must be made clear which algorithms can be executed given a particular piece of quantum hardware. For example, if we looked at a GPU\u0026rsquo;s spec sheet, it would tell us which processing technologies it supports, how many cores it has, how many polygons it can process per second, and so on. The same goes for the quantum case: right now, uniform names are being invented for the performance metrics of QPUs (Quantum Processing Units). They have names like Quantum Volume (QV) or Circuit Layer Operations per Second (CLOPS), and they try to provide a uniform yardstick for measuring quantum computers. But even on this there is no clear consensus yet; an algorithms person who prefers to use a cold-atom-based quantum computer may say that a given metric is biased toward superconductor-based quantum computers, for example. So another interesting problem to work on is how to design performance metrics, how to measure them, and how good (in the chosen metric) the quantum computer needs to be in order to carry out the computation you want to run.\nQuantum clouds So one hard problem is the problem of compiling from a high-level language down to machine language, and as noted, every machine here tends to be very different. They are all unique and special snowflakes, but we live in the cloud era. Even if there is heterogeneity in the computing systems we run algorithms on, most of the time we really do not care.
We are used to seeing a uniform interface, with some organization behind it responsible for operating all that magic. That is our expectation as users, and it is also the most economical way to manage computation at large scale. That is what will be convenient for users, and that is how the quantum companies will be able to make the big money. The obvious question is: how do you build a quantum data center? The software solutions that AWS, Azure, and their peers bring us today, for getting access to one, two, or more machines, running code on them, and getting the results back, someone will have to build for quantum computing. Already today AWS provides a service called Braket, and Azure has its Quantum Cloud. But the truth is that these are solutions tailor-made for one or two special machines, and adding more machines to the data center is not simple.\nQuantum computers are inherently unstable creatures, and one of the hard challenges is calibrating them so that they are ready for work. Ways must be found to calibrate the machines automatically, and reliable, efficient methods must be built for managing the quantum cloud and the computers in it. The engineering skill set required here is especially interesting, because hunting down qubit calibration problems also requires a deep physical understanding of the systems. From this one can guess that a new and particularly interesting role will be born, which we might call the Quantum DevOps Engineer: software engineers with a physics background and an interest in systems that combine hardware. A triple threat.\nQuantum for VHDL champions and analog design wizards If you are less into JetBrains and mechanical keyboards and more into PSpice and oscilloscopes, then this field has goodies from here to the Mellanox exit.\nQubit control is an interesting and nontrivial business. The transistors we build electronic computers from have come a long way since their early days at Bell Labs in New Jersey. They have gone from devices the size of a small toaster to ones whose size is measured in nanometers. They are reliable, fast, and quite robust, thanks to phenomenal fabrication and control capabilities. Qubits, by contrast, are at a very, very immature stage. Beyond the fact that there is not even one dominant technology from which qubits are built, the fabrication processes within each technology are still maturing.\nAs we already said, this immaturity means that to control qubits well, one must develop tools for automatic and frequent calibration of the systems.
Beyond that, the very act of controlling qubits, sending the right electrical signals at the right time, requires precise and fast timing capabilities, at the nanosecond level. The control systems must also be able to perform measurements on the system and react at extremely fast rates (the latency between measurement and action is also measured in nanoseconds). To succeed at this task, quantum control systems must be tailored to this unique mission. For many years experimentalists built control systems for themselves, but this task is becoming more and more complex as quantum processors become more complicated.\nThe set of tools and resources required to produce control systems that can meet the market\u0026rsquo;s needs is becoming so large that individual labs (and companies working on quantum information) will not be able, or will not want, to develop such systems themselves. This is a classic example of a market maturing and beginning to produce sub-specializations. The hardware aspect of the quantum control field is full of highly diverse engineering challenges. What is required is to deliver fast, wide-band electronics (rates ranging from a few hertz to many gigahertz), at extremely low noise levels (because noise sabotages the delicate qubits), at high component density (because hundreds of qubits must be controlled without filling an entire hall with electronic controllers), with extremely tight synchronization, and in a form that gives users a comfortable experience. There are challenges here in complex logic design, high-quality RF engineering (because many qubits operate at microwave frequencies and demand that skill set accordingly), advanced packaging and thermal management (because qubit controllers can consume many kilowatts of energy, and that heat has to be managed), and so on. In short, loads of fun!\n","permalink":"https://blog.winer.co.il/posts/quantum_eng_cross_post/","summary":"\u003cp\u003eThis post was \u003ca href=\"https://www.qubit-il.com/post/%D7%9E%D7%97%D7%A9%D7%91%D7%99%D7%9D-%D7%A7%D7%95%D7%95%D7%A0%D7%98%D7%99%D7%99%D7%9D-%D7%9C%D7%9E%D7%94%D7%A0%D7%93%D7%A1%D7%99%D7%9D-%D7%A7%D7%9C%D7%90%D7%A1%D7%99%D7%99%D7%9D\"\u003eoriginally posted\u003c/a\u003e on qubit.il, the Israeli quantum community.\u003c/p\u003e\n\u003cp\u003eIt is the hype of the moment. It is the gold rush of the 2020s.
Flattering articles about groundbreaking companies, large funding rounds raised by small teams coming together in coworking spaces across the country and the world, and even a few IPOs. The technology is quantum information. This may be the biggest technological revolution since the previous information revolution, the one that gave us personal computers, mobile phones, and the internet. In short: there is definitely a story here that could be a big deal.\u003c/p\u003e","title":"Quantum engineering (in hebrew)"},{"content":"A very short intro to bosonic codes (Cats) This is an informal introduction to Bosonic qubits in circuit QED greatly inspired by: Atharv Joshi et al 2021 Quantum Sci. Technol. 6 033001\nBosons 🤡? It\u0026rsquo;s not strictly important that we understand what bosons are. However, I know that at least for me seeing a funny word I don\u0026rsquo;t know is a distractor when reading something new. I just have to know what the word means. So let\u0026rsquo;s get it out of the way as quickly as possible.\nQuantum particles have quantum numbers associated with them. These numbers are labels. Quantities associated with the particle. Two examples of such labels are electric charge and spin. Charge is more familiar than spin, but I think most people reading this have good practical intuition on how we typically think of spin: an arrow pointing up or down and \u0026ldquo;glued\u0026rdquo; to the particle in some way. What we may not usually think about, if we\u0026rsquo;ve not had formal training in this field, is \u0026ldquo;how much spin\u0026rdquo; the particle has. This spin vector has a length which for reasons I will not go into (though it\u0026rsquo;s just algebra!) is measured in half-integer quantities. So you can have spin-\( ½ \) particles, spin-1 particles and so on (the units, btw, are multiples of \( \hbar \) which is a fundamental constant with units of angular momentum.
That\u0026rsquo;s beside the point, though).\nTo avoid repeatedly writing half-integer and integer spin, let\u0026rsquo;s give such particles names. Fermions are those particles with half-integer spin and Bosons are particles with integer spin. It\u0026rsquo;s not immediately apparent why this should be the case, but there\u0026rsquo;s a huge difference in the behaviors of fermions and bosons. It comes into effect when you have more than one such particle.\nThis difference is articulated in something called \u0026ldquo;spin statistics\u0026rdquo; and it simply means fermions do not like occupying the same quantum state whereas bosons are quite happy to do so. It is also known, for fermions, as the Pauli exclusion principle. It\u0026rsquo;s incredibly important and is an essential basic principle behind all chemistry and the structure of atoms, molecules and the universe.\nExamples of fermions are electrons and protons. Examples of bosons are photons and compound particles, like (sometimes) a whole atomic nucleus. Bosons can also be things which are not particles at a first glance, which is where they come into play in the story of Bosonic codes.\nIn quantum mechanics particles and waves are interchangeable descriptions. For example, consider the photon. The photon is an \u0026ldquo;excitation of the electromagnetic field\u0026rdquo;. The way we describe the field is as a (what else) set of coupled harmonic oscillators. These oscillators, like a guitar string, have modes of vibration, and we can count the excitations of each mode. Excitation number 1, number 2 and so on. Each excitation has more energy than the last. The photon is simply the first excitation, the one with the least energy. The surprising fact is that there is such a thing to begin with: you can\u0026rsquo;t have half an excitation as you would classically. But this is not important for us, yet again. What is important is how you identify such photon (boson) states.
You can choose a basis, the Fock basis, which simply counts which excitations are present in your quantum state. This is a Fock state with n photons: \(\ket{n}\). These are the modes we will be working with when describing boson codes.\nResonators A resonator is the name we give to a device that can maintain an oscillation. An example of a resonator is a transparent glass donut which can have a photon circulating inside. This is called a \u0026ldquo;Whispering Gallery Mode\u0026rdquo; resonator, because it\u0026rsquo;s a similar effect to what happens in the Whispering Gallery of St. Paul\u0026rsquo;s Cathedral in London. Another, simpler, example is just two mirrors facing each other. This is the effect we get when we share our screen in Slack, and that screen shows the very Slack window to which we are sharing. This back and forth ping-pong \u0026ldquo;resonates\u0026rdquo; like an echo. Resonators for electromagnetic waves, sometimes called cavities (like cavities in your teeth), can be designed and manufactured in such a way that the waves live in them for a very long time before decaying. A good way to say exactly how good a cavity is involves counting the number of full oscillations a wave can undergo before it decays to nothing. In a really good cavity this number can be as high as \(10^{10}\) or more. Some resonators are planar, meaning they live in the plane where the electromagnetic waves propagate (they are 2-dimensional). Others are 3D cavities and, clearly, extend in a third direction. 3D cavities are bulkier but better for some things. They are often used in boson code contexts.\nWe discussed how cavities are used in reading the quantum state of a superconducting qubit during our first quantum hardware seminars. The idea was that we transmit a microwave pulse at a cavity and measure its reflection, both in amplitude and phase.
In this current context things are going to be very similar.\nThe takeaway message from this section is: photons are bosons, they can exist in a resonator for a long time, and there they can be described as occupying a bosonic mode of the cavity. One basis for describing such modes is the Fock basis, which is the photon-number basis. Of course, this is not the only possible basis.\nCreation and Annihilation Operators The algebra of how we work with Fock states is simple and elegant. An operator acts on a state, possibly changing it to a different state and/or adding a multiplicative (complex) pre-factor. The annihilation and creation operators act on a Fock state as follows:\n$$ \hat{a}\ket{n} = \sqrt{n}\ket{n-1}$$$$ \hat{a}^\dagger\ket{n} = \sqrt{n+1}\ket{n+1}$$If we combine these, we get the number operator (check for yourself):\n$$\hat{n}\ket{n} = \hat{a}^\dagger\hat{a}\ket{n} = n\ket{n}$$So if you can implement this operator, you can measure how many photons there are in the cavity!\nKitten code Now we go on to designing a useful qubit encoding, built from cavity modes.\nConsider the following encoding where the logical qubit is defined in terms of a superposition of (bosonic) Fock states.\n$$ \ket{0_L} = \frac{1}{\sqrt{2}} \left( \ket{0} + \ket{4} \right) $$$$ \ket{1_L} = \ket{2} $$On average, each of these states has the same photon number, \(\langle\hat{n}\rangle=2\). You can check this by calculating the expectation value of the number operator for each basis state. We can also say that these states have even parity. They have a +1 eigenvalue when operated upon by the parity operator.\nThis selection of basis is quite clever for the following reason. What if, for example, a single photon is lost? This is an error we can easily model with the \(\hat{a}\) operator.
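As a quick numerical sanity check of what photon loss does to these states, here is a toy Rust sketch of my own (not from any quantum library): states are written as real amplitude vectors in the Fock basis, and the annihilation rule \(\hat{a}\ket{n} = \sqrt{n}\ket{n-1}\) is applied directly.

```rust
// Annihilation in the Fock basis: the amplitude on |n+1> contributes
// sqrt(n+1) times itself to |n| in the output state.
fn annihilate(state: &[f64]) -> Vec<f64> {
    (0..state.len().saturating_sub(1))
        .map(|n| ((n + 1) as f64).sqrt() * state[n + 1])
        .collect()
}

// Crude photon-number parity: +1 if the weight sits on even |n>, -1 if odd.
fn parity(state: &[f64]) -> i32 {
    let even: f64 = state.iter().step_by(2).map(|a| a * a).sum();
    let odd: f64 = state.iter().skip(1).step_by(2).map(|a| a * a).sum();
    if even > odd { 1 } else { -1 }
}

fn main() {
    let s = 1.0 / 2.0_f64.sqrt();
    // |0_L> = (|0> + |4>)/sqrt(2), written in the basis |0>, ..., |4>.
    let zero_l = vec![s, 0.0, 0.0, 0.0, s];
    assert_eq!(parity(&zero_l), 1); // even parity before any error
    let after = annihilate(&zero_l);
    // a|0_L> = sqrt(2)|3>: all the weight lands on the odd state |3>.
    assert!((after[3] - 2.0_f64.sqrt()).abs() < 1e-12);
    assert_eq!(parity(&after), -1);
    println!("photon loss flips parity: +1 -> {}", parity(&after));
}
```

The same exercise on \(\ket{1_L}=\ket{2}\) gives \(\sqrt{2}\ket{1}\), so a single photon loss always flips the parity of this encoding, which is exactly what makes the error detectable.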
If we start with a state with even parity, \(\ket{\psi_L} = \alpha \ket{0_L} + \beta \ket{1_L}\), what will happen to our stored qubit?\n$$ \hat{a} \ket{\psi_L} = \sqrt{2}(\alpha \ket{3} + \beta \ket{1})$$This has a different parity than the data states and is therefore detectable as an error. By mapping the error states \(\ket{1}\) and \(\ket{3}\) back to our logical qubits we can retrieve the original data. So we can recover from the error.\nYou will be within your full rights at this point to demand I explain several things. A few of these are:\nHow do you measure the parity operator? How do you measure photon number? How do you measure the \(\ket{1}\) and \(\ket{3}\) parts separately to retrieve the data? Is the data lost upon correction? And many more. But I will not answer these questions here and we can all ask them together during the upcoming seminar featuring Dr. Jérémie Guillaud from Alice\u0026amp;Bob.\nHowever, there is a very nice piece of motivation on why anyone should care about a kitten code. An alternative way to encode information such that we are able to correct for single qubit flips is the four qubit code. In such a code there are four physical qubits, three ancillary qubits and a considerable amount of supporting hardware. This includes couplers, readout resonators and wires. This is shown on the right-hand side of the diagram below.\nCompare this to the left-hand side of the diagram where there are two cavities and one qubit (transmon), all enabling the same level of error protection. Neat.\nWhy is the transmon there, you may also ask, when we only discussed the cavities? Well, here as well, I\u0026rsquo;m going to ask you to be patient and wait for the special talk.\nCat codes: A rotation symmetric code The kitten code is a code designed to correct photon loss. However, there are other kinds of errors in the world. Ideally, you\u0026rsquo;d like to also correct for multiple photon loss as well as dephasing errors if possible.
Cat codes are another example of a bosonic qubit, but unlike the kitten code above, they are not made of Fock states. Instead, the cat code is constructed from coherent states.\nCoherent states Coherent states are \u0026ldquo;the most classical\u0026rdquo; of quantum states. When people say this, they mean the following. Take a coherent state \(\ket{\alpha}\) and put it off-center in a quadratic potential. The evolution of the center of mass of this state in the potential (how it oscillates in the potential) will be exactly like that of a ball moving in the potential well. So in a way it does not look quantum.\nAnother way to describe a coherent state is: the state that is an eigenstate of the annihilation operator.\n$$\hat{a}\ket{\alpha}=\alpha\ket{\alpha}$$If you want to write the coherent state in the basis of Fock states, you can of course do that. You do need an infinite series though:\n$$ \ket{\alpha} = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\ket{n}$$If you look closely, you can see that the photon-number probabilities here follow a Poisson distribution. Deep.\nBack to cat qubits So instead of being constructed from Fock states, Cat-codes are constructed from coherent states. For example (up to normalization) as:\n$$ \ket{\alpha} \pm \ket{-\alpha}$$This is where the cat name comes from. We make a state with something (the coherent state) and something very nearly orthogonal to it, like the cat that is both dead and alive.\nOne last thing: Wigner distributions If you go to the Wikipedia page on cat states you will see something quite mesmerising. I stole it and put it down below:\nYou could watch this for hours in full procrastination glory. But\u0026hellip; What is it? It\u0026rsquo;s a Wigner distribution (Who?) which is a quasi-probability distribution (What?). I will now oversimplify, but I think it\u0026rsquo;s a good first pass on the topic.
The Wigner function takes a quantum state and maps it onto a real-valued function of pairs of variables, like the position and momentum of a quantum particle. This way you can plot the state of the system and visualize its evolution. You can also observe phenomena such as interference, which you can see as fringes (oscillations) in the gif. This concept closely relates to the concept of phase space you may know from classical mechanics and electrical engineering.\nWigner functions and other related functions are used in quantum optics and quantum information, and are especially popular with folks doing bosonic codes.\nConclusion Cat codes are encodings of logical qubits onto the states of an electromagnetic cavity. These encodings are designed to be robust against certain kinds of errors. The use of bosonic modes can enable the simplification of the QPU and improve scalability. The system still uses transmons or other superconducting qubit architectures, but there are fewer of them, and we didn\u0026rsquo;t explain what they are there for. Another thing I did not say: we are in, or at least near, the realm of continuous variable quantum computing. Just in case you want some extra nighttime reading. Hope to see you all in Thursday\u0026rsquo;s talk!\n","permalink":"https://blog.winer.co.il/posts/catqubitpost/","summary":"\u003ch1 id=\"a-very-short-intro-to-bosonic-codes-cats\"\u003eA very short intro to bosonic codes (Cats)\u003c/h1\u003e\n\u003cp\u003eThis is an informal introduction to Bosonic qubits in circuit QED greatly inspired by: \u003ca href=\"https://iopscience.iop.org/article/10.1088/2058-9565/abe989\"\u003eAtharv Joshi et al 2021 Quantum Sci. Technol. 6 033001\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"bosons-\"\u003eBosons 🤡?\u003c/h2\u003e\n\u003cp\u003eIt\u0026rsquo;s not strictly important we understand what bosons are. However, I know that at least for me seeing a funny word I don\u0026rsquo;t know is a distractor when reading something new.
I just have to know what the word means. So let\u0026rsquo;s get it out of the way as quickly as possible.\u003c/p\u003e","title":"Cat Qubits"},{"content":"How do you calculate 1+1 on a quantum computer? Uri Levy\u0026rsquo;s question There\u0026rsquo;s something about being part of a group that\u0026rsquo;s wonderfully transformative. You drink coffee with some people, day in, day out, for a few years, and start speaking a common language. You work with them, commute with them, and walk past them in the halls. And you end up being like them or at least trying to be. Working in the Weizmann Institute\u0026rsquo;s (WIS) complex system department was indeed a very transformative environment. There are many ways in which this place has a language of its own. There\u0026rsquo;s the science, of course. It\u0026rsquo;s the atoms, ions, lasers, magnets, resonators, and nonlinear crystals. It\u0026rsquo;s catching the end of someone\u0026rsquo;s sentence when walking by \u0026ldquo;\u0026hellip;and that\u0026rsquo;s just adiabatic elimination once again!\u0026rdquo;. But there\u0026rsquo;s more to that than just that. It\u0026rsquo;s also about how people reason about the world in general. How they explain themselves and question others. Encountering someone who thinks very clearly can be magical. I tried back then, as I do now, to emulate such figures. The school of thought of members of the faculty such as Nir Davidson, Ofer Firstenberg, Roee Ozeri, and others. It\u0026rsquo;s a school whose motto is always trying to distill an idea to its simplest and most condensed form and (usually) doing so kindly. Another of these figures is Dr. Uri Levy. A physicist who had roamed those halls as a young student. He then pursued a career in physics in industry, only to return once more as a moth to the flame. Uri has a way of asking questions that is like Socratic dialogue. 
They are delivered with quiet honesty but tend to find weak spots in the argument, like a Stinger missile hitting a Russian tank. After leaving WIS\u0026rsquo;s comforts, I started working on quantum computers. Uri was curious and called me one day to ask, \u0026ldquo;So, I know quantum computers will break cryptography. But how do I even use one to calculate 1+1\u0026rdquo;? I had been promising Uri an answer for a while and kept putting it off. The rest of this post, which started with a long-winded (and superfluous) introduction, aims to address this question and to do so in the spirit of the complex systems department.\nNumber types and binary arithmetic Until the early 20th century, \u0026ldquo;A Computer\u0026rdquo; was a person doing the work of calculating things. Today, when we say this word, we mean an electronic device that performs computations using something stored on it physically. This information, a list of symbols representing ones and zeros, not so dissimilar to lines cut into a clay tablet, can be implemented in many ways. For example, it can be stored on flexible plastic tape coated with a thin layer of iron. Segments of this layer can then be magnetized, with the magnetic north pointing one way or the other. Manipulating these stored bits of information, done by flipping the direction of the magnetic poles, is how we change what is stored on the tape.\nCounting with cogs Our brains are great at remembering every lyric of an album we listened to as a teenager. But we\u0026rsquo;re not good at keeping track of too many digits, especially when out of context. Some people can recall Pi to fantastic precision, but that\u0026rsquo;s a feat achieved by specific and targeted practice. A typical person only has a working memory of about 7 digits, but for me personally, it\u0026rsquo;s much less. This is at least part of why children use fingers when learning basic arithmetic.
Keeping track of the digits with their digits (which is why we call them digits to begin with!). So external memory is essential when performing arithmetic. Still, to apply a mechanical process to do mathematics for us, we need to add some logic. You can do that in as many ways as your imagination allows, but using machines is an excellent way.\nI recently visited the Musée des Arts et Métiers (Museum of Arts and Crafts) in Paris. In typical Parisian fashion, it\u0026rsquo;s this beautiful historic building. The spacious and bright exhibition halls include a converted Gothic church which, as designed, evokes a gasp as you enter. In place of a pulpit and religious iconography stands a full-size Vulcain rocket engine (the kind used on the Ariane 5 vehicle), as big as a small bus and at least as prepared to leave this earth as the biblical angel it replaced.\nThe museum holds all manner of historic scientific instruments and comes highly recommended. However, one relic I was particularly interested in was the Pascaline, Pascal\u0026rsquo;s mechanical calculator.\nThis machine has a panel with several dials, each inscribed with the digits 0-9. Like when writing a number on paper, each dial position encodes a different decimal place (units, tens, hundreds, and so on). This enables you to input a number into the calculator, which acts as a mechanical memory. Using it frees the user\u0026rsquo;s mind from remembering the first number of the computation. Having input a first number, you repeatedly turn the dials to enter the number to be added or subtracted. The display will then show the result of the computation. Addition can be repeated multiple times to achieve multiplication. By the same token, subtraction can be repeated for division (with remainder). You do need to manually keep track of the number of times you added or subtracted, though (as far as I could tell).\nCounting with a dial is nothing fancy.
If I set the dial to, say, 2 and then turn it by 2 positions, it will show the value \u0026ldquo;4\u0026rdquo;. I\u0026rsquo;ve successfully performed the computation 2+2=4. However, if the dial was at 9 and I now try to add 2, I will get \u0026ldquo;clock arithmetic\u0026rdquo;: 2+9=1. This is the result modulo 10, my counting base. This is not how multi-digit numbers are added, though. We were taught, at a very young age, that we need to \u0026ldquo;carry the 1\u0026rdquo;. So when we add two digits and the result is at least 10 (or whichever number base you may be working in), you must pass the overflowing digit to the following position to the left.\nHalf adder and full adder A half-adder is a device implementing this behavior, where the overflow digit is passed on (more commonly referred to as \u0026ldquo;carried over\u0026rdquo;). A half-adder takes two input digits and outputs a sum and a carry digit. Two inputs, two outputs. More is needed for a functional adding machine, as you also need a carry-in digit (to check if the previous position has overflowed). A device with this more complex behavior, having three inputs (two digits + carry-in) and two outputs (sum and carry-out), is called a full adder.\nIn the Pascaline, under the hood, a set of full adders is implemented with cogs. Each cog meshes with the ones before and after it such that a full rotation of the previous cog drags the next one by one position. Doing this smoothly is the heart of the innovation introduced here.\nCalculating with qubits The quantum full adder circuit It seems that all that remains to be explained is how to make a quantum full adder, one which can be implemented with atoms, photons, or trapped ions. Here\u0026rsquo;s an example implementation of such a circuit:\nThis is a quantum circuit diagram. Each line is a quantum register, which, in principle, can represent multiple qubits. In this example, each is a single qubit, so we are looking at a 4-qubit circuit.
The elements appearing on the lines are quantum logic gates operating on the qubits. There are many kinds of these, but here they are all controlled-not (also called controlled-X) gates. Their operation is to flip the target qubit from 1 to 0 (and vice versa) if the control qubits are all set to 1. The solid dots are the control qubits, and the open circle with a plus sign is the target qubit. Note there can be multiple control qubits. I could go through the truth table of this circuit, showing this is indeed a full adder whose inputs are q0 and q1 and whose carry-in and carry-out registers are q2 and q3. Because this post is getting too long as it is, you can take it as an exercise or take my word for it.\nThere\u0026rsquo;s one crucial point to remember here. It\u0026rsquo;s a point that has nothing to do with how you compute 1+1 on a quantum computer and everything to do with what a quantum computer actually does. As I have already mentioned, each of the registers appearing on this diagram is a qubit. It\u0026rsquo;s not a bit, taking the values 1 or 0. It can take the superposition state \\(\\ket{q} = \\alpha\\ket{0} + \\beta \\ket{1} \\), where \\(\\alpha, \\beta\\) are complex numbers.\nSo this is how you would compute the addition of two binary digits encoded onto single qubits on a quantum computer. There are many open questions left. For example: how would you implement a two-qubit gate like the ones shown in the circuit diagram? But that\u0026rsquo;s a question I would prefer to address in a separate post. Another question is: what if I want to add numbers greater than 1? In such a case, like in any digital computer, you\u0026rsquo;d need more (qu)bits to represent the number.\nClassiq and the automated synthesis of quantum circuits Building quantum circuits to implement some algorithm is a non-trivial task. The simple adder circuit I show above was synthesized using the Classiq platform.
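If you want to do the truth-table exercise in code rather than by hand, a few lines of Python suffice. This only tracks classical basis states (which is all a truth table needs), and the exact gate sequence below is a common reversible full-adder construction, not necessarily the one in my diagram:

```python
from itertools import product

def cx(q, c, t):
    """Controlled-NOT: flip target t if control c is 1."""
    if q[c]:
        q[t] ^= 1

def ccx(q, c1, c2, t):
    """Toffoli (doubly controlled NOT): flip t if both controls are 1."""
    if q[c1] and q[c2]:
        q[t] ^= 1

def full_adder(a, b, cin):
    """Reversible full adder on four (qu)bits [q0=a, q1=b, q2=cin, q3=0]."""
    q = [a, b, cin, 0]
    ccx(q, 0, 1, 3)    # carry-out picks up a AND b
    cx(q, 0, 1)        # q1 <- a XOR b
    ccx(q, 1, 2, 3)    # carry-out picks up (a XOR b) AND cin
    cx(q, 1, 2)        # q2 <- a XOR b XOR cin (the sum)
    cx(q, 0, 1)        # restore q1 to b
    return q[2], q[3]  # (sum, carry-out)

# Check the full truth table against ordinary integer addition.
for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(a, b, cin)
    assert a + b + cin == s + 2 * cout
```

The same gates, applied to qubits instead of bits, act linearly on superpositions; that is the part a classical truth table cannot show.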
This is all a shameless plug, as Classiq is the company I work for. But I suppose a personal blog is one big shameless self-promotion anyway, so it makes sense.\nIn any case, the idea with algorithmic synthesis is to automate and abstract away some of the work at the level of single qubits, so you can actually do complex tasks.\nIn fact, designing complex quantum circuits will become quite impossible once the qubit number becomes large. As an example, see what the Classiq platform generates for the same simple adder as above, but where each input register has three qubits (so it can represent the addition of 7+7).\nThe circuits quickly become unwieldy as the number of qubits increases, requiring some measure of clever automation.\nUncomputation and garbage collection I described how quantum logic gates can be used to create a full adder circuit, but also tried to hint that that\u0026rsquo;s not the whole story. If all you want to do is a single computation that does not feed into the next algorithmic step, there really isn\u0026rsquo;t any more to be said. But that would be a terrible use case for a quantum computer. Unless your arithmetic step is the last one in the quantum circuit, it feeds into other algorithmic building blocks. If that is the case, you will end up with some unavoidable housekeeping to do.\nSuppose we have qubits that do not represent data (as the input registers do) but help the computation in some way (sometimes known as auxiliary or helper qubits). In that case, they can become entangled with the computation. At first thought, so what? The issue is that simply ignoring these qubits is like measuring them. If they are entangled with your computation results, that can affect the result you get. I may write a post on uncomputation at some point, but for now, I will just point to the wiki entry on it. If you know uncomputation is needed, you can extend the circuit and \u0026ldquo;unwind\u0026rdquo; the entanglement between auxiliary and data qubits.
This way, they become independent, and you don\u0026rsquo;t need to care about measuring auxiliaries. Moreover, if you uncompute, those qubits become free again. They can be used as a resource in further work done by the quantum computer. This is another crucial example of how automatic circuit synthesis, as done by the Classiq platform, makes quantum algorithm design more tractable.\nConclusion Computing 1+1=2 ended up being far less exciting than one may have hoped. There\u0026rsquo;s no quantum magic in this implementation; it\u0026rsquo;s merely a full adder. Much more can be said, though. We can discuss how quantum logic has to be reversible, unlike classical logic. We can explain how single- and multi-qubit gates are implemented in different hardware platforms. We can muse about the intricacies of synthesizing circuits efficiently and comment on when running deep quantum circuits with many operations in sequence will be realistic. But all these things are best kept for another time.\n","permalink":"https://blog.winer.co.il/posts/arithmetic-on-quantum/","summary":"\u003ch1 id=\"how-do-you-calculate-11-on-a-quantum-computer\"\u003eHow do you calculate 1+1 on a quantum computer?\u003c/h1\u003e\n\u003ch2 id=\"uri-levys-question\"\u003eUri Levy\u0026rsquo;s question\u003c/h2\u003e\n\u003cp\u003eThere\u0026rsquo;s something about being part of a group that\u0026rsquo;s wonderfully transformative. You drink coffee with some people, day in, day out, for a few years, and start speaking a common language. You work with them, commute with them, and walk past them in the halls. And you end up being like them or at least trying to be.\nWorking in the Weizmann Institute\u0026rsquo;s (WIS) complex system department was indeed a very transformative environment. There are many ways in which this place has a language of its own. There\u0026rsquo;s the science, of course.
It\u0026rsquo;s the atoms, ions, lasers, magnets, resonators, and nonlinear crystals. It\u0026rsquo;s catching the end of someone\u0026rsquo;s sentence when walking by \u0026ldquo;\u0026hellip;and that\u0026rsquo;s just adiabatic elimination once again!\u0026rdquo;. But there\u0026rsquo;s more to that than just that. It\u0026rsquo;s also about how people reason about the world in general. How they explain themselves and question others.\nEncountering someone who thinks very clearly can be magical. I tried back then, as I do now, to emulate such figures. The school of thought of members of the faculty such as Nir Davidson, Ofer Firstenberg, Roee Ozeri, and others. It\u0026rsquo;s a school whose motto is always trying to distill an idea to its simplest and most condensed form and (usually) doing so kindly.\nAnother of these figures is Dr. Uri Levy. A physicist who had roamed those halls as a young student. He then pursued a career in physics in industry, only to return once more as a moth to the flame. Uri has a way of asking questions that is like Socratic dialogue. They are delivered with quiet honesty but tend to find weak spots in the argument, like a stinger missile hitting a Russian tank.\nAfter leaving WIS\u0026rsquo;s comforts, I started working on quantum computers. Uri was curious and called me one day to ask, \u0026ldquo;So, I know quantum computers will break cryptography. But how do I even use one to calculate 1+1\u0026rdquo;? I promised Uri an answer for a while and kept putting it off. The rest of this post, which started with a long-winded (and superfluous) introduction, aims to address this question and to do so in the spirit of the complex systems department.\u003c/p\u003e","title":"Arithmetic on Quantum Computers"},{"content":"About me My name is Gal. I\u0026rsquo;m an experimental physicist from Herzliya, Israel, a lovely coastal town just north of Tel Aviv.\nI am married to Danit and we have two cute kids.\nI did my physics Ph.D. 
at the Weizmann Institute under the supervision of Prof. Ofer Firstenberg, where I was the first student in the lab. My main experiment involved cold atoms and quantum optics using Rydberg atoms. I was also passionate about writing scientific outreach articles at the time, something I would like to revive in this blog.\nBefore coming to Weizmann, I had worked on scanning optical microscopes for the semiconductor industry at Applied Materials. I also spent one magical summer at CERN, where I helped develop tools for the ATLAS data processing pipeline.\nI have been happily active in the budding quantum computing industry for several years. I am primarily working on quantum control and quantum software and am very interested in improving the interface between quantum scientists and quantum computers.\n","permalink":"https://blog.winer.co.il/about/","summary":"\u003ch1 id=\"about-me\"\u003eAbout me\u003c/h1\u003e\n\u003cp\u003eMy name is Gal. I\u0026rsquo;m an experimental physicist from Herzliya, Israel, a lovely coastal town just north of Tel Aviv.\u003c/p\u003e\n\u003cp\u003eI am married to Danit and we have two cute kids.\u003c/p\u003e\n\u003cp\u003eI did my physics Ph.D. at the Weizmann Institute under the supervision of \u003ca href=\"https://www.weizmann.ac.il/complex/Firstenberg/home\"\u003eProf. Ofer Firstenberg\u003c/a\u003e, where I was the first student in the lab. My main experiment involved cold atoms and quantum optics using Rydberg atoms. I was also passionate about writing \u003ca href=\"https://davidson.weizmann.ac.il/authors/%D7%92%D7%9C-%D7%95%D7%99%D7%A0%D7%A8\"\u003escientific outreach articles\u003c/a\u003e at the time, something I would like to revive in this blog.\u003c/p\u003e","title":""}]