Is there an anti-control gate in Qiskit?

I started playing with Qiskit and can't find an anti-controlled NOT there. By "anti-control" I mean a gate that is applied only to those states of the superposition where the control qubit is in the |0⟩ state.
It's annoying to use code like
circuit.x(control)
circuit.cx(control, target)
circuit.x(control)
I would much prefer
circuit.acx(control, target)
In circuit-diagram notation, I want a gate whose control is conditioned on |0⟩ (drawn as an open circle rather than the usual filled dot).
Is there a dedicated operation for this in Qiskit?

You can make your own "anti-controlled" gate by specifying which state to control the X gate on, using the gate's control() method.
I think this would look something like
from qiskit.circuit.library import XGate
anti_gate = XGate().control(ctrl_state='0')
circuit.append(anti_gate, [control, target])
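For completeness, here is a minimal self-contained sketch of the same idea (the two-qubit circuit and the 0/1 control and target indices are just placeholders for the example):

from qiskit import QuantumCircuit
from qiskit.circuit.library import XGate

circuit = QuantumCircuit(2)
control, target = 0, 1

# X gate controlled on the control qubit being |0> (an open-circle control)
anti_cx = XGate().control(1, ctrl_state='0')
circuit.append(anti_cx, [control, target])

print(circuit.draw())  # the anti-control is drawn as an open 'o' dot

Depending on your Qiskit version, QuantumCircuit.cx may also accept a ctrl_state argument directly, in which case circuit.cx(control, target, ctrl_state=0) gives the same result.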

Related

Is there a Qiskit function that allows you to see which qubit/quantum register a given gate is attached to?

I am trying to find a way to know which named qubit/quantum register a quantum gate (e.g. a labelled Pauli-X gate) is attached to. The documentation does not have a function or example that shows how to do this. The picture below shows what I am trying to do: find qubit n0 from quantum gate U0.
Example quantum circuit
The easiest way to do this would probably be to access the data attribute of your QuantumCircuit object (e.g. circuit.data if your circuit object is named circuit). This will be a list of tuples containing the instruction object (i.e. the gate instance), the quantum bit arguments for that instruction, and the classical bit arguments for the instruction: (instruction, qargs, cargs). Your example circuit is simple because there is only one gate, so it will be the first element in that list. For that case you can do something like u0_qubits = circuit.data[0][1], and u0_qubits will be a list of the Qubit objects. Doing this for larger circuits with possible duplicate gates will obviously be more involved.
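As a rough illustration (the two-qubit register named n and the single X gate standing in for the labelled U0 gate are assumptions made for the example):

from qiskit import QuantumRegister, QuantumCircuit

qr = QuantumRegister(2, 'n')
circuit = QuantumCircuit(qr)
circuit.x(qr[0])  # stands in for the labelled U0 gate in the picture

# each entry of circuit.data is (instruction, qargs, cargs)
u0_qubits = circuit.data[0][1]
print(u0_qubits)  # e.g. [Qubit(QuantumRegister(2, 'n'), 0)]

(In newer Qiskit versions the entries of circuit.data are CircuitInstruction objects rather than plain tuples, in which case the same qubits are available as circuit.data[0].qubits.)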

SceneKit – access destination color in shader modifier for blending

Is there a way to access the last fragment color (destination color) in a Metal shader modifier, similar to gl_LastFragData in GLES?
My goal is to perform custom blending using shader modifiers (SceneKit's SCNBlendModes do not suffice in my situation). Currently I'm using SCNTechnique with 3 passes (render the destination, render the source, combine) to achieve this, which seems like major overkill to me; it is also really hard to have several blending groups without introducing new passes.
SCNProgram does not seem like an option for several reasons (I'm using PBR, tessellation/subdivision; I'd rather stick with using techniques for now I guess).
I've tried using #extension GL_EXT_shader_framebuffer_fetch : require as suggested in this answer, but it doesn't work even for GLSL shader modifiers (I'm using Xcode 9.0 and iOS 11).
I've also stumbled upon this wonderful gist that has SceneKit's default metal shader implementation, but it seems that blending is not performed there. Which makes me wonder if that is the reason why I can't find any destination color reference: blending happens somewhere else.
Is SCNProgram the only way besides the SCNTechnique atrocity?
P.S:
The only mention of gl_LastFragData in the context of Metal that I've found is in chapter 4.8 "Programmable Blending" of the Metal Shading Language Specification, which would be helpful if I could somehow access [[color(0)]] or something similar in a shader modifier (if that's even possible).
I just wanted to check that you hadn't overlooked the fragment entry point.
In the documentation it says: "Use this entry point to change the color of a fragment after all other shading has been performed."
I'm not sure if this is exactly what you mean by accessing the "last fragment color" but thought it might be worth mentioning.
https://developer.apple.com/documentation/scenekit/scnshadermodifierentrypoint/1523342-fragment

AI Minesweeper project

I need to implement a Minesweeper solver, and I have started by implementing a rule-based agent.
I have implemented some rules, and I have a heuristic function for choosing the best-matching rule for the cell currently being treated (using information about its surrounding cells). For each chosen cell the agent can decide, for each of the 8 surrounding cells, whether to open it, mark it, or do nothing. In other words, at the moment the agent takes some revealed cell as input and decides what to do with its neighbours; it does not yet know how to decide which cell to treat next.
My question is: what algorithm should I implement for deciding which cell to treat next?
Suppose that for the first move the agent reveals a corner cell (or some other cell, according to some rule for the first move). What should it do after that?
I understand that I need to implement some kind of search. I know many search algorithms (BFS, DFS, A*, and others); that is not the problem. I just do not understand how to apply these searches here.
I need to implement it following the principles of Artificial Intelligence: A Modern Approach.
BFS, DFS, and A* are probably not appropriate here. Those algorithms are good if you are trying to plan out a course of action when you have complete knowledge of the world. In Minesweeper, you don't have such knowledge.
Instead, I would suggest trying some of the logical inference techniques from Section III of the book, particularly using SAT or the techniques from Chapter 10. These will let you draw conclusions about where the mines are from facts like "at least one of these eight squares is a mine" or "exactly two of these eight squares are mines." Doing this at each step will help you identify where the mines are, or realize that you must guess before continuing.
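As a very rough sketch of that idea (not the book's algorithms verbatim; the constraint representation and the cell names are assumptions made for the example), you can collect one constraint per revealed number, enumerate every mine assignment over the unknown frontier cells that satisfies all constraints, and mark the cells that come out the same way in every model:

from itertools import product

def infer(constraints):
    # Each constraint is (frozenset_of_unknown_cells, mine_count).
    # Returns (certain_mines, certain_safe) among the frontier cells.
    frontier = sorted(set().union(*(cells for cells, _ in constraints)))
    models = []
    for bits in product((0, 1), repeat=len(frontier)):
        assignment = dict(zip(frontier, bits))
        if all(sum(assignment[c] for c in cells) == count
               for cells, count in constraints):
            models.append(assignment)
    if not models:
        return set(), set()
    mines = {c for c in frontier if all(m[c] for m in models)}
    safe = {c for c in frontier if not any(m[c] for m in models)}
    return mines, safe

# Toy example: "exactly 1 mine among {x, y}", "exactly 1 mine among {y, z}",
# and "exactly 2 mines among {x, y, z}".
mines, safe = infer([(frozenset({'x', 'y'}), 1),
                     (frozenset({'y', 'z'}), 1),
                     (frozenset({'x', 'y', 'z'}), 2)])
print(mines, safe)  # e.g. {'x', 'z'} {'y'}

Enumerating every assignment is exponential in the size of the frontier, so a real solver would split the frontier into independent components or hand the constraints to a SAT solver, but the inference step is the same: open the cells proved safe, flag the cells proved to be mines, and only guess when both sets come back empty.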
Hope this helps!
I ported this (with a bit of help). Here is the link to it working: http://robertleeplummerjr.github.io/smartSweepers.js/ . Here is the project: https://github.com/robertleeplummerjr/smartSweepers.js
Have fun!

How could we get a variable value from GLSL?

I'm doing a project with a lot of calculation, and I had the idea of offloading pieces of the work to the GPU. I wonder whether we can retrieve results back from GLSL, and if it is possible, how?
GLSL does not provide outputs besides what is placed in the frame buffer.
To program a GPU and get results more conveniently, use CUDA (NVidia only) or OpenCL (cross-platform).
In general, what you want to do is use OpenCL for general-purpose GPU tasks. However, if you are insistent about pretending that OpenGL is not a rendering API...
Framebuffer Objects make it relatively easy to render to multiple outputs. This of course means that you have to structure your processing such that what gets rendered matches what you want. You can render to 32-bit floating-point "images", so you have access to plenty of precision. The biggest difficulty is what I stated: figuring out how to structure your task to match rendering.
It's a bit easier when using transform feedback. This is the ability to write the output of the vertex (or geometry) shader processing to a buffer object. This still requires structuring your tasks into something like rendering, but it's easier because vertex shaders have a strict one-vertex-to-one-vertex mapping. For every input vertex, there is exactly one output. And if you draw GL_POINTS, it's not too difficult to use attributes to pass the data that changes.
Both easier and harder is the use of shader_image_load_store. This is effectively the ability to read/write from/to arbitrary images "whenever you want". I put that last part in quotes because there are lots of esoteric rules about data race conditions: reading from a value written by another shader invocation and so forth. These are not trivial to deal with. You can try to structure your code to avoid them, by not writing to the same image location in the same shader. But in many cases, if you could do that, you could just render to the framebuffer.
Ultimately, it's pretty much impossible to answer this question in the general case, without knowing what exactly you're trying to actually do. How you approach GPGPU through a rendering API depends greatly on exactly what you're trying to compute.

How do I detect whether the sample supplied by VideoSink.OnSample() is right-side up?

We're currently using the Silverlight VideoSink to capture video from users' local webcams, kinda like so:
protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData)
{
    if (FrameShouldBeSubmitted())
    {
        byte[] resampledData = ResizeFrame(sampleData);
        mediaController.SetVideoFrame(resampledData);
    }
}
Now, on most of the machines that we've tested, the video sample provided in the byte[] sampleData parameter is upside-down, i.e., if you try to take the RGBA data and turn it into, say, a WriteableBitmap, the bitmap will be upside-down. That's odd, but fairly easy to correct, of course -- you just have to reverse the array as you encode it.
The problem is that at least on some machines (e.g., the single Macintosh in our test environment), the video sample provided is no longer upside-down, but right-side up, and hence, flipping the image actually results in an image that's received upside-down on the far side.
I reported this to MS as a bug, but their (terse) response was that it was "As Designed". Further attempts at clarification have so far been ignored.
Now, I'll grant that it's kinda entertaining to imagine the discussions behind this design decision: "OK, just to make it interesting, let's play the video rightside up on a Mac, but let's turn it upside down for Windows!" "Great idea!" "Yeah, that'll keep those developers guessing!" But beyond that, I can't find this, umm, "feature" documented anywhere, nor can I find any documentation on how one is supposed to be able to tell that a given video sample is upside down or rightside up. Any thoughts on how to tell this?
EDIT 3/29/10 4:50 pm - I got a response from MS which said that the appropriate way to tell was through the Stride property on the VideoFormat object, i.e., if the stride value is negative, the image will be upside-down. However, my own testing indicates that unless I'm doing something wrong, this isn't the case. At least on my own machine, whether the stride value is zero or negative (the only options I see), the sampled image is still upside-down.
I was going to suggest looking at VideoFormat.Stride, provided at VideoSink.OnFormatChange, but then I noticed your edit. I went ahead and tested it on my dev machine: the image is upside down and the stride is negative, as expected. Have you checked again recently?
Even though stride made perfect sense for native applications (where it is used in pointer arithmetic), I agree that the current behavior is not what you would expect from a modern API. However, performance-wise it is better not to modify the data received from the native API.
And while we are talking about performance, why not provide samples in formats other than PixelFormatType.Format32bppArgb so that we can avoid the color space conversion? BTW, there is a VideoCaptureDevice.DesiredFormat property, but it only works for resolution, as there is no alternative to PixelFormatType.Format32bppArgb.
