I am writing an extension function in C for PostgreSQL.
I can find lots of examples online, but nothing that explicitly shows how to actually write data to a table from an extension function.
Where do I need to look to find the right functionality/documentation for writing a record to an existing table from a C extension?
I should've googled a bit longer before posting.
It seems that SPI fits my needs exactly:
http://www.postgresql.org/docs/current/static/spi.html
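For anyone landing here later, a minimal sketch of what an SPI-based insert can look like. The table name my_table, the column, and the inserted value are placeholders, not from my actual schema:

/* minimal_spi_insert.c */
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(insert_example);

/* SQL side (also a placeholder):
 *   CREATE FUNCTION insert_example() RETURNS void
 *   AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
 */
Datum
insert_example(PG_FUNCTION_ARGS)
{
    int ret;

    /* Open an SPI connection for the duration of this call. */
    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    /* Run an ordinary INSERT; my_table and the value are placeholders. */
    ret = SPI_exec("INSERT INTO my_table (val) VALUES (42)", 0);
    if (ret != SPI_OK_INSERT)
        elog(ERROR, "SPI_exec failed: %d", ret);

    SPI_finish();

    PG_RETURN_VOID();
}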
I don't know much about this and I'm trying to keep building on it. My goal is to do image stacking with some criteria, using the C language, as I came upon some cool ideas I think I should be capable of applying to my photos. My C background should be enough to understand what I may need. That being said...
So far I've learned how to read an existing .TIFF file and save it into a char array. The problem is I don't know how its data is laid out, so I can't analyze individual pixels and modify them, or build another .TIFF file from the data I previously read.
I've read some things about a library called libtiff which may be useful, but I can't find where to get it, nor how to install it.
Does anyone know how a .TIFF file's data is stored, so that I can read it and apply changes to it?
Also,
Does anyone have any experience with handling image files and editing in C? Where did you learn it from?
Do you know of any place I could search for information/tutorials?
Any help will be very useful,
Thanks in advance.
You can do an enormous amount of very sophisticated processing on TIFFs, or any one of 190+ other formats, with ImageMagick, without any need to understand the TIFF format or write any C. Try searching Stack Overflow for [imagemagick].
If you want to do processing yourself, consider https://cimg.eu
Another option might be to convert your TIFFs to NetPBM, which is much, much simpler to read and write in C. That would be as follows with ImageMagick:
magick INPUT.TIFF -compress none OUTPUT.PPM
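Once you have the ASCII PPM, a rough sketch of reading it back in plain C might look like this (no '#' comment handling; it assumes the OUTPUT.PPM name from the command above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough P3 (ASCII PPM) reader; it expects exactly the plain output of the
 * magick command above and does not handle '#' comment lines. */
int main(void)
{
    FILE *f = fopen("OUTPUT.PPM", "r");   /* file name from the command above */
    if (!f) { perror("fopen"); return 1; }

    char magic[3];
    int w, h, maxval;
    if (fscanf(f, "%2s %d %d %d", magic, &w, &h, &maxval) != 4 ||
        strcmp(magic, "P3") != 0) {
        fprintf(stderr, "not a plain P3 PPM\n");
        return 1;
    }

    unsigned char *pixels = malloc((size_t)w * h * 3);   /* RGB triplets */
    if (!pixels) { perror("malloc"); return 1; }

    for (long i = 0; i < (long)w * h * 3; i++) {
        int v;
        if (fscanf(f, "%d", &v) != 1) {
            fprintf(stderr, "truncated file\n");
            return 1;
        }
        pixels[i] = (unsigned char)v;
    }

    printf("%dx%d image, maxval %d\n", w, h, maxval);
    free(pixels);
    fclose(f);
    return 0;
}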
So I've run into an interesting design pattern and I wanted to know if you guys had an opinion on it.
Basically, the design is passing everything around as a pre-serialized type. There are no "types" for the return values, for example; everything is passed as a simple uint8_t*. There is a defined header that "tells" you what is in the buffer, how big it is, what the version of the buffer is, etc. I call it "pre-serialized" because it forces flattening of all structures.
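To make this concrete, a hypothetical header along these lines might look like the following (the field names and sizes are my illustration, not the actual layout in the code I am looking at):

#include <stdint.h>

/* Hypothetical sketch of the kind of header I am describing. */
typedef struct {
    uint32_t magic;    /* marks the buffer as "one of ours"            */
    uint16_t version;  /* layout version of the flattened payload      */
    uint16_t type;     /* enumerated payload type the reader must know */
    uint32_t length;   /* payload size in bytes, following this header */
} buffer_header;

/* Everything is then passed around as uint8_t*: header first, flat payload after. */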
The pros:
You can easily write it (or even a set of them) to whatever you want. Files, IO, whatever.
Can store arbitrary data.
The cons (IMHO):
No type safety is going to be a nightmare.
The programmer has to parse the buffer by hand. Even if there is an enumerated type, the user has to know what that type means. Even if there are functions to parse the type, the programmer has to know that those are the functions to call.
Version hell: changing code will cause a ripple effect of errors. Because every call site parses the buffer differently, you have no idea where the code works and where it is broken.
It is viral: because it is flat, you can't "insert" the header onto outside data. You could wrap the call if you copy your data, but that could cause an unnecessary copy, which would be SLOW. So either your code is slower than it needs to be, or you conform to this data structure.
It isn't human-readable OR debuggable.
Have you seen this design pattern before? Is there a name for this design pattern? Things I missed?
Is there a name for this design pattern?
Well, Legacy Code? :) I have seen such a design in 30-year-old Cobol systems...
The pros you have stated are just as easily achieved by using an XML format (or JSON):
You can easily write it (or even a set of them) to whatever you want. Files, IO, whatever; most of all, web services!
Can store arbitrary data.
Furthermore, all your cons are eliminated.
The only pro I can see in your solution is conciseness: when every byte counts and any overhead is too expensive, this is nice.
Added: Cobol has a feature to easily define the structure of such serialized data; see the PICTURE clause. Reading the data is then very easy: you read it as variables. (It is like having binary data, defining a struct in the C language, and typecasting the binary data to that struct.)
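A rough C sketch of that analogy, with an invented record layout just for illustration:

#include <stdint.h>
#include <string.h>

/* Describe the fixed layout once (the copybook/PICTURE role),
 * then read raw bytes as that layout. */
#pragma pack(push, 1)
typedef struct {
    char    customer_id[8];
    int32_t balance_cents;
    char    currency[3];
} account_record;
#pragma pack(pop)

void read_record(const uint8_t *buf, account_record *out)
{
    /* memcpy instead of a direct pointer cast avoids alignment problems */
    memcpy(out, buf, sizeof *out);
}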
As Honza said, this would be normal in legacy Cobol/PL1 (was there a Cobol/PL1 conversion, or an interface to COBOL programs?).
In COBOL this design pattern would make sense; I am not sure about C though (one of the binary serialization packages, or JSON etc., might be more sensible).
In Cobol, you would have a Cobol copybook which all programs would use, and you could edit the data using the Cobol copybook (with something like File-Aid or the Micro Focus Data Editor).
Why use this "design pattern" in Cobol:
Regression testing of modules; you can write a driver module like:
Read Test-data-file
while more-data
    Call Module
    write Result to output-file
    Read Test-data-file
end
You can then compare the output from the pre-change program with the output from the changed program.
Testing: sometimes you can use a "production file" in testing.
A file provides a trace or snapshot of what is going on; this can be very useful.
Easy to reorganize batch streams:
Split a program up (and pass the data via a file). There are a variety of reasons for doing this, including:
the program has gotten too big and is hard to maintain.
Sorting the data
Performance (use a file rather than hitting the DB multiple times)
new uses for extracted data
While your cons are valid for C, they will be less of an issue in Cobol.
The key to using this "design pattern" is being able to edit/view/compare the format. If you cannot edit/view/compare a file, I do not see the point.
Am I correct in assuming that an obscure file format loader's C-level source/abstraction, one that closely corresponds to a hex dump of the original file, can also be used to write source code that constructs said file format from scratch, in what seems to be something like bootstrapping?
In general, no. There usually are auxiliary resources that do not need to be written out, but still have to be reconstructed by the loading function. It's hard to say anything more without knowing your specific situation.
In my quest to learn C (Plain C, not C#, nor C++. I have my reasons.), I have come across the need to extract some information from a HTML document, fetched from a URL. Namely, I want all href attributes from the links residing in a certain unordered list on the page, in an array of strings. These URLs point at images I want to download and store in a zip file.
Now, I've asked a few people I know who are good at C, and they have either told me off with "C is the wrong tool", or pointed me at libXML, which is apparently famous for its scarce documentation. I've also looked at libsoup and libtidy, but I can't seem to stitch the pieces together.
What approach/library should I pick? Does anyone know of some example code I could look at?
EDIT: Seeing that half the comments are telling me to use something other than C, I'll add that I'm not looking for the "right tool for the job". I'd probably use Ruby if I just wanted to get it done ASAP, simply because I'm comfortable with it. It's part of my quest to learn C, and as such, I'm looking for a pure C solution.
Since you are on a quest to learn C, I would use the standard library:
http://www.cplusplus.com/reference/clibrary/cstdio/
http://www.cplusplus.com/reference/clibrary/cstring/
The easiest approach is to use something else to get the page, write it to a local file, then pass the file name into your program. Print your output to STDOUT.
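For example, a naive sketch along those lines (it only handles double-quoted href attributes and ignores the "only links inside a certain unordered list" requirement, but it shows how far stdio/string handling gets you):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read a locally saved HTML file, scan for href="..." with the standard
 * library only, and print each URL to stdout. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s page.html\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *html = malloc((size_t)size + 1);
    if (!html) { perror("malloc"); return 1; }
    size_t got = fread(html, 1, (size_t)size, f);
    html[got] = '\0';
    fclose(f);

    const char *p = html;
    while ((p = strstr(p, "href=\"")) != NULL) {
        p += strlen("href=\"");
        const char *end = strchr(p, '"');
        if (!end) break;
        printf("%.*s\n", (int)(end - p), p);   /* print the attribute value */
        p = end + 1;
    }

    free(html);
    return 0;
}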
I am working on a Computed Tomography problem, in which I have to simulate the generation of the raw data or sinogram that a CT apparatus generates.
Matlab has a built-in function, radon(), to simulate this. I have successfully written custom code in Matlab to generate the sinogram (i.e., without using radon()).
I have converted this code into C, using the OpenCV library to handle the loading/display/saving of images.
The problem is that though my Matlab output generates the sinogram as expected, my C code does not. I have merely translated the Matlab code into C, but the C output is oriented differently and also has black strips in between. The gray levels in the C output do resemble the sinogram gray levels and pattern generated by the Matlab code; it just appears segmented in C. (I will send the images across if you give me your email address, since I cannot attach them here.)
Could someone help me out as to why this is happening? I have peer-reviewed my code and checked for typecast errors, memory allocations, etc., but they all seem correct.
Does Matlab handle data differently than C? What could be the explanation for the tilt?
Please help me out. Do let me know if you need any more clarification regarding the problem statement or need to see the algorithm.
Thanks!
It is very hard to help with a question like this when we see neither the code, the output, nor the expected output.
Perhaps you can upload the images to some public image hosting, and add links from the question?
If you're doing trigonometric function calls (sin() and friends), I would pay extra attention to the arguments used, and also check if maybe Matlab is delivering more precision in the result, somehow. Of course, this is a stab in the dark since I'm not familiar with your domain.
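As one concrete example of that kind of argument problem (purely a guess, since I haven't seen your code): Matlab's radon() and sind()/cosd() work in degrees, while C's sin()/cos() expect radians. A tiny sketch of the difference:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    /* Whether a degrees-vs-radians mix-up is actually your bug is only a guess. */
    double theta_deg = 45.0;
    double theta_rad = theta_deg * M_PI / 180.0;   /* convert before calling sin() */

    printf("sin(45 degrees) = %f\n", sin(theta_rad));
    printf("sin(45.0)       = %f  (this is 45 radians, not degrees)\n", sin(45.0));
    return 0;
}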
Here are the images related to the question I asked in the original post.
The expected output:
http://www.photoshop.com/users/pyridot/albums/a40e3f7326d942ff821fc00612e6b458/view#e027c2b94bfd4210870bc6c57b1f1a03
The C Output:
http://www.photoshop.com/users/pyridot/albums/a40e3f7326d942ff821fc00612e6b458/view#ff529abedb3e49aa8865276f2c2bc625