Running a TensorFlow session produces an empty tensor (C API)

I am trying to use the Tensorflow C API to run a session with the Deeplab graph. The frozen graph of Deeplab, pre-trained on Cityscapes, was downloaded from here:
http://download.tensorflow.org/models/deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz
When I run the model with Python, I get the expected segmentation output.
By printing out all of the graph's tensors with the Python line tensors = [n.values() for n in tf.get_default_graph().get_operations()], I found that the input tensor has dimensions {1,?,?,3} and the output tensor has dimensions {1,?,?}, with data types uint8 and int64 respectively. I used this information to write a C++ method that runs the graph session:
int Deeplab::run_segmentation(image_t* img, segmap_t* seg) {
    using namespace std;

    // Allocate the input tensor
    TF_Tensor* const input = TF_NewTensor(TF_UINT8, img->dims, 4, img->data_ptr, img->bytes, &free_tensor, NULL);
    TF_Operation* oper_in = TF_GraphOperationByName(graph, "ImageTensor");
    const TF_Output oper_in_ = {oper_in, 0};

    // Allocate the output tensor
    TF_Tensor* output = TF_NewTensor(TF_INT64, seg->dims, 3, seg->data_ptr, seg->bytes, &free_tensor, NULL);
    TF_Operation* oper_out = TF_GraphOperationByName(graph, "SemanticPredictions");
    const TF_Output oper_out_ = {oper_out, 0};

    // Run the session on the input tensor
    TF_SessionRun(session, nullptr, &oper_in_, &input, 1, &oper_out_, &output, 1, nullptr, 0, nullptr, status);

    return TF_GetCode(status); // https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/tf_status.h#L42
}
Here the argument types image_t and segmap_t hold the parameters needed to call TF_NewTensor: the pointer to the allocated buffer for the input/output tensor, the tensor's dimensions, and its size in bytes:
typedef struct segmap {
    const int64_t* dims;
    size_t bytes;
    int64_t* data_ptr;
} segmap_t;

typedef struct image {
    const int64_t* dims;
    size_t bytes;
    uint8_t* data_ptr;
} image_t;
Then, I used OpenCV to fill an array with the street scene image (the same one as above), and passed the image_t and segmap_t structs into the session-run method:
// Allocate input image object
const int64_t dims_in[4] = {1, new_size.width, new_size.height, 3};
image_t* img_in = (image_t*)malloc(sizeof(image_t));
img_in->dims = &dims_in[0];
//img_in->data_ptr = (uint8_t*)malloc(new_size.width*new_size.height*3);
img_in->data_ptr = resized_image.data;
img_in->bytes = new_size.width*new_size.height*3*sizeof(uint8_t);
// Allocate output segmentation map object
const int64_t dims_out[3] = {1, new_size.width, new_size.height};
segmap_t* seg_out = (segmap_t*)malloc(sizeof(segmap_t));
seg_out->dims = &dims_out[0];
seg_out->data_ptr = (int64_t*)calloc(new_size.width*new_size.height, sizeof(int64_t));
seg_out->bytes = new_size.width*new_size.height*sizeof(int64_t);
But the resulting tensor (seg_out->data_ptr) consisted of all 0s. The graph seemed to execute for about 5 seconds, the same amount of time as the working Python implementation. Somehow, the graph is failing to dump the output tensor data into the buffer I allocated. What am I doing wrong?

There were two mistakes:
First, Deeplab's input tensor dimensions are {1, height, width, 3}, and the output tensor dimensions are {1, height, width}. So I had to swap height and width.
Also, you cannot read the result back through the buffer you wrapped yourself: TF_SessionRun does not fill a caller-supplied output buffer; it stores a pointer to a tensor it allocates itself in the output slot. So creating the output tensor with TF_NewTensor(..., data_ptr, ...), running TF_SessionRun, and then reading data_ptr does not work. Instead, create the output tensor with TF_AllocateTensor(...), run TF_SessionRun, and access the result with TF_TensorData(tensor).
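For reference, a minimal sketch of the corrected calls, reusing the names from the question (session, status, free_tensor, oper_in_, oper_out_, img); h and w are hypothetical variables holding the resized image's height and width:

const int64_t dims_in[4]  = {1, h, w, 3};   // {1, height, width, 3}
const int64_t dims_out[3] = {1, h, w};      // {1, height, width}

TF_Tensor* input = TF_NewTensor(TF_UINT8, dims_in, 4, img->data_ptr,
                                h * w * 3 * sizeof(uint8_t), &free_tensor, NULL);

// Create the output tensor with TF_AllocateTensor; TF_SessionRun will
// overwrite this pointer with the tensor it actually produces.
TF_Tensor* output = TF_AllocateTensor(TF_INT64, dims_out, 3, h * w * sizeof(int64_t));

TF_SessionRun(session, nullptr,
              &oper_in_, &input, 1,
              &oper_out_, &output, 1,
              nullptr, 0, nullptr, status);

if (TF_GetCode(status) == TF_OK) {
    // Read the result through TF_TensorData, not through a pre-allocated buffer.
    const int64_t* seg = static_cast<const int64_t*>(TF_TensorData(output));
    // ... use seg[0 .. h*w - 1], then release the tensor ...
    TF_DeleteTensor(output);
}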

Related

nanopb (Protocol Buffers library): encoding repeated sub-messages

We are using nanopb as our Protocol Buffers library. We defined the following messages:
simple.proto:
syntax = "proto2";
message repField {
required float x = 1;
required float y = 2;
required float z = 3;
}
message SimpleMessage {
required float lucky_number = 1;
repeated repField vector = 2;
}
with simple.options
SimpleMessage.vector max_count:300
So we know the repeated field has a fixed maximum count of 300 and define it as such.
Parts of the generated one looks like:
simple.pb.c:
const pb_field_t repField_fields[4] = {
    PB_FIELD( 1, FLOAT  , REQUIRED, STATIC, FIRST, repField, x, x, 0),
    PB_FIELD( 2, FLOAT  , REQUIRED, STATIC, OTHER, repField, y, x, 0),
    PB_FIELD( 3, FLOAT  , REQUIRED, STATIC, OTHER, repField, z, y, 0),
    PB_LAST_FIELD
};

const pb_field_t SimpleMessage_fields[3] = {
    PB_FIELD( 1, FLOAT  , REQUIRED, STATIC, FIRST, SimpleMessage, lucky_number, lucky_number, 0),
    PB_FIELD( 2, MESSAGE, REPEATED, STATIC, OTHER, SimpleMessage, vector, lucky_number, &repField_fields),
    PB_LAST_FIELD
};
and part of simple.pb.h:
/* Struct definitions */
typedef struct _repField {
    float x;
    float y;
    float z;
/* ##protoc_insertion_point(struct:repField) */
} repField;

typedef struct _SimpleMessage {
    float lucky_number;
    pb_size_t vector_count;
    repField vector[300];
/* ##protoc_insertion_point(struct:SimpleMessage) */
} SimpleMessage;
We try to encode the message by doing:
// Init message
SimpleMessage message = SimpleMessage_init_zero;
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
// Fill in message
[...]
// Encode message
status = pb_encode(&stream, SimpleMessage_fields, &message);
// stream.bytes_written is wrong!
But stream.bytes_written is wrong, which means the message is not encoded correctly, even though status = 1.
In the documentation for pb_encode() it says:
[...] However, submessages must be serialized twice: first to
calculate their size and then to actually write them to output. This
causes some constraints for callback fields, which must return the
same data on every call.
But we are not sure how to interpret this sentence, i.e. exactly what steps to follow to achieve this.
So our question is:
What is the correct way to encode messages that contain fixed-size (repeated) submessages using the nanopb library?
Thank you!
You're not using callback fields here, so that quote doesn't matter for you. But if you were, it would just mean that in some situations your callback would be called multiple times.
Are you the same person as on the forum? Your Stack Overflow question does not show it, but the person on the forum has a similar problem that appears to be due to not setting vector_count. If it is not set, the repeated field remains a zero-length array. So try adding:
message.vector_count = 300;
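For completeness, a minimal sketch of the fill-and-encode sequence with vector_count set (the field values and the 8 KB buffer size are placeholders, not part of the original question):

uint8_t buffer[8192];   // hypothetical buffer, large enough for 300 submessages

SimpleMessage message = SimpleMessage_init_zero;
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));

message.lucky_number = 13.0f;
message.vector_count = 300;          // tell nanopb how many array entries are in use
for (pb_size_t i = 0; i < message.vector_count; ++i) {
    message.vector[i].x = 1.0f;
    message.vector[i].y = 2.0f;
    message.vector[i].z = 3.0f;
}

bool status = pb_encode(&stream, SimpleMessage_fields, &message);
// On success, stream.bytes_written now accounts for all 300 submessages.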
In the future, please wait a few days before posting the same question in multiple places. It's a waste of volunteer time to answer the same question multiple times.

Conduct AngleAxisToRotationMatrix on part of a double array in Ceres?

I'm currently working with Ceres and Eigen, and I have a 6x3 = 18-element double array, call it xs, which is defined as:
double xs[6*3];
Basically, xs contains 6 rotations expressed in angle-axis format. I need to convert each of the 6 rotations into a rotation matrix, after which matrix multiplications will be performed.
struct F1 {
    template <typename T>
    bool operator()(const T* const xs, T* residual) const {
        Eigen::Map<const Eigen::Matrix<T,3,1> > m0(xs, 3);
        T m[9], res[3];
        ceres::AngleAxisToRotationMatrix(m0, m);
        residual[0] = res[0];
        residual[1] = res[1];
        residual[2] = res[2];
        return true;
    }
};
In this example code I extract the first 3 elements of xs via Eigen::Map and then apply AngleAxisToRotationMatrix to it. But I keep receiving errors like this:
error: no matching function for call to ‘AngleAxisToRotationMatrix(Eigen::Map<const Eigen::Matrix<ceres::Jet<double, 18>, 3, 1, 0, 3, 1>, 0, Eigen::Stride<0, 0> >&, ceres::Jet<double, 1> [9])’
Can somebody lend me a hand here? I'm pretty new to Ceres and Eigen, and this has nearly driven me crazy.
Thanks!
ceres::AngleAxisToRotationMatrix expects raw pointers:
AngleAxisToRotationMatrix(xs, m);
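For illustration, a sketch of the functor with that change applied; the actual residual computation is elided, as in the question:

struct F1 {
    template <typename T>
    bool operator()(const T* const xs, T* residual) const {
        T R[9];
        // Pass the raw pointer to the first 3 angle-axis parameters;
        // for the k-th of the 6 rotations, pass xs + 3*k instead.
        ceres::AngleAxisToRotationMatrix(xs, R);
        // ... use the column-major 3x3 matrix R to compute the residuals ...
        residual[0] = T(0);
        residual[1] = T(0);
        residual[2] = T(0);
        return true;
    }
};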

Segmentation fault when using TF_SessionRun to run TensorFlow graph in C (not C++)

I'm trying to load and run a TensorFlow graph using the C API (I need to build outside of the TensorFlow project, and preferably without Bazel, so can't use C++).
The graph is a 3-layer LSTM-RNN which classifies feature vectors of 3 elements into one of 9 classes. The graph is built and trained in Python, and I've tested it in both Python and C++.
So far, I've got the graph loading, however I'm having trouble running the session once the graph is loaded. I've done a fair bit of digging around, but I've only found one example using the C API (here), and that doesn't include running the graph.
I've managed to put together the following, but it produces a segmentation fault (I can successfully run the code if I comment out the TF_SessionRun() call, but I get the seg fault when TF_SessionRun() is included). Here's the code:
#include "tensorflow/c/c_api.h"
#include <stdio.h>
#include <stdlib.h>
#include <memory.h>
#include <string.h>
#include <assert.h>
#include <vector>
#include <algorithm>
#include <iterator>
TF_Buffer* read_file(const char* file);
void free_buffer(void* data, size_t length) {
free(data);
}
static void Deallocator(void* data, size_t length, void* arg) {
free(data);
}
int main() {
// Use read_file to get graph_def as TF_Buffer*
TF_Buffer* graph_def = read_file("tensorflow_model/constant_graph_weights.pb");
TF_Graph* graph = TF_NewGraph();
// Import graph_def into graph
TF_Status* status = TF_NewStatus();
TF_ImportGraphDefOptions* graph_opts = TF_NewImportGraphDefOptions();
TF_GraphImportGraphDef(graph, graph_def, graph_opts, status);
if (TF_GetCode(status) != TF_OK) {
fprintf(stderr, "ERROR: Unable to import graph %s", TF_Message(status));
return 1;
}
else {
fprintf(stdout, "Successfully imported graph\n");
}
// Configure input & provide dummy values
const int num_bytes = 3 * sizeof(float);
const int num_bytes_out = 9 * sizeof(int);
int64_t dims[] = {3};
int64_t out_dims[] = {9};
float values[3] = {-1.04585315e+03, 1.25702492e+02, 1.11165466e+02};
// Setup graph inputs
std::vector<TF_Tensor*> input_values;
TF_Operation* input_op = TF_GraphOperationByName(graph, "lstm_1_input");
TF_Output inputs = {input_op, 0};
TF_Tensor* input = TF_NewTensor(TF_FLOAT, dims, 1, &values, num_bytes, &Deallocator, 0);
input_values.push_back(input);
// Setup graph outputs
TF_Operation* output_op = TF_GraphOperationByName(graph, "output_node0");
TF_Output outputs = {output_op, 0};
std::vector<TF_Tensor*> output_values(9, nullptr);
// Run graph
fprintf(stdout, "Running session...\n");
TF_SessionOptions* sess_opts = TF_NewSessionOptions();
TF_Session* session = TF_NewSession(graph, sess_opts, status);
assert(TF_GetCode(status) == TF_OK);
TF_SessionRun(session, nullptr,
&inputs, &input_values[0], 3,
&outputs, &output_values[0], 9,
nullptr, 0, nullptr, status);
fprintf(stdout, "Successfully run session\n");
TF_CloseSession(session, status);
TF_DeleteSession(session, status);
TF_DeleteSessionOptions(sess_opts);
TF_DeleteImportGraphDefOptions(graph_opts);
TF_DeleteGraph(graph);
TF_DeleteStatus(status);
return 0;
}
TF_Buffer* read_file(const char* file) {
FILE *f = fopen(file, "rb");
fseek(f, 0, SEEK_END);
long fsize = ftell(f);
fseek(f, 0, SEEK_SET);
void* data = malloc(fsize);
fread(data, fsize, 1, f);
fclose(f);
TF_Buffer* buf = TF_NewBuffer();
buf->data = data;
buf->length = fsize;
buf->data_deallocator = free_buffer;
return buf;
}
I'm not sure exactly where I'm going wrong with TF_SessionRun, so any help would be greatly appreciated!
Update: I've set a break point at the TF_SessionRun call in gdb, and as I step through it, I first get:
Thread 1 received signal SIGSEGV, Segmentation fault.
0x0000000100097650 in ?? ()
followed by:
"Cannot find bounds of current function"
I initially thought this was because the TensorFlow library wasn't compiled with debug symbols, but I have since compiled it with debug symbols and get the same output in gdb.
Since my original post I found a TensorFlow C example here (however the author points out that it's untested). As such, I've since re-written my code according to their example, and have double checked everything with TensorFlow's c_api.h header file. I'm also now calling the C API from a C++ file (as that's what's done in the above example). Despite all this, I'm still getting the same output from gdb.
Update 2: To ensure that my graph is loading properly, I've used some of the TF_Operation functions in the C API (TF_GraphNextOperation() and TF_OperationName()) to check the graph operations, and have compared these with the operations when loading the graph in Python. The output looks correct, and I can retrieve properties from the operations (e.g. using TF_OperationNumOutputs()), so it appears the graph is definitely loading correctly.
Advice from someone with experience using TensorFlow's C API would be greatly appreciated.
I managed to resolve the issue after more time trying out functions in the C API and paying close attention to the dimensionality of my placeholders. My original seg fault was caused by passing the wrong operation name string to TF_GraphOperationByName(); however, the seg fault only occurred at TF_SessionRun(), as that was the first place that tried to access the operation. Here's how I resolved the issue, for anyone facing the same problem:
Firstly, check your operations to ensure that they're assigned correctly. In my case, the operation name I provided to input_op was incorrect due to an error when obtaining the operation names in Python. The incorrect op name I got from Python was 'lstm_4_input'. I found this was incorrect by running the following on the loaded graph with the C API:
int n_ops = 700;
for (int i = 0; i < n_ops; i++)
{
    size_t pos = i;
    std::cout << "Input: " << TF_OperationName(TF_GraphNextOperation(graph, &pos)) << "\n";
}
Where n_ops is the number of operations in your graph. This will print out your operation names; in this case I could see there was no 'lstm_4_input', but there was an 'lstm_1_input', so I changed the value accordingly. Furthermore, it validated that my output operation, 'output_node0', was correct.
There were a few other issues that became clear once I resolved the seg fault, so here's the complete working code, with detailed comments, for anyone facing similar problems:
#include "tensorflow/c/c_api.h"
#include <stdio.h>
#include <stdlib.h>
#include <memory.h>
#include <string.h>
#include <assert.h>
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
TF_Buffer* read_file(const char* file);
void free_buffer(void* data, size_t length) {
free(data);
}
static void Deallocator(void* data, size_t length, void* arg) {
free(data);
// *reinterpret_cast<bool*>(arg) = true;
}
int main() {
// Use read_file to get graph_def as TF_Buffer*
TF_Buffer* graph_def = read_file("tensorflow_model/constant_graph_weights.pb");
TF_Graph* graph = TF_NewGraph();
// Import graph_def into graph
TF_Status* status = TF_NewStatus();
TF_ImportGraphDefOptions* graph_opts = TF_NewImportGraphDefOptions();
TF_GraphImportGraphDef(graph, graph_def, graph_opts, status);
if (TF_GetCode(status) != TF_OK) {
fprintf(stderr, "ERROR: Unable to import graph %s", TF_Message(status));
return 1;
}
else {
fprintf(stdout, "Successfully imported graph\n");
}
// Create variables to store the size of the input and output variables
const int num_bytes_in = 3 * sizeof(float);
const int num_bytes_out = 9 * sizeof(float);
// Set input dimensions - this should match the dimensionality of the input in
// the loaded graph, in this case it's three dimensional.
int64_t in_dims[] = {1, 1, 3};
int64_t out_dims[] = {1, 9};
// ######################
// Set up graph inputs
// ######################
// Create a variable containing your values, in this case the input is a
// 3-dimensional float
float values[3] = {-1.04585315e+03, 1.25702492e+02, 1.11165466e+02};
// Create vectors to store graph input operations and input tensors
std::vector<TF_Output> inputs;
std::vector<TF_Tensor*> input_values;
// Pass the graph and a string name of your input operation
// (make sure the operation name is correct)
TF_Operation* input_op = TF_GraphOperationByName(graph, "lstm_1_input");
TF_Output input_opout = {input_op, 0};
inputs.push_back(input_opout);
// Create the input tensor using the dimension (in_dims) and size (num_bytes_in)
// variables created earlier
TF_Tensor* input = TF_NewTensor(TF_FLOAT, in_dims, 3, values, num_bytes_in, &Deallocator, 0);
input_values.push_back(input);
// Optionally, you can check that your input_op and input tensors are correct
// by using some of the functions provided by the C API.
std::cout << "Input op info: " << TF_OperationNumOutputs(input_op) << "\n";
std::cout << "Input data info: " << TF_Dim(input, 0) << "\n";
// ######################
// Set up graph outputs (similar to setting up graph inputs)
// ######################
// Create vector to store graph output operations
std::vector<TF_Output> outputs;
TF_Operation* output_op = TF_GraphOperationByName(graph, "output_node0");
TF_Output output_opout = {output_op, 0};
outputs.push_back(output_opout);
// Create TF_Tensor* vector
std::vector<TF_Tensor*> output_values(outputs.size(), nullptr);
// Similar to creating the input tensor, however here we don't yet have the
// output values, so we use TF_AllocateTensor()
TF_Tensor* output_value = TF_AllocateTensor(TF_FLOAT, out_dims, 2, num_bytes_out);
output_values.push_back(output_value);
// As with inputs, check the values for the output operation and output tensor
std::cout << "Output: " << TF_OperationName(output_op) << "\n";
std::cout << "Output info: " << TF_Dim(output_value, 0) << "\n";
// ######################
// Run graph
// ######################
fprintf(stdout, "Running session...\n");
TF_SessionOptions* sess_opts = TF_NewSessionOptions();
TF_Session* session = TF_NewSession(graph, sess_opts, status);
assert(TF_GetCode(status) == TF_OK);
// Call TF_SessionRun
TF_SessionRun(session, nullptr,
&inputs[0], &input_values[0], inputs.size(),
&outputs[0], &output_values[0], outputs.size(),
nullptr, 0, nullptr, status);
// Assign the values from the output tensor to a variable and iterate over them
float* out_vals = static_cast<float*>(TF_TensorData(output_values[0]));
for (int i = 0; i < 9; ++i)
{
std::cout << "Output values info: " << *out_vals++ << "\n";
}
fprintf(stdout, "Successfully run session\n");
// Delete variables
TF_CloseSession(session, status);
TF_DeleteSession(session, status);
TF_DeleteSessionOptions(sess_opts);
TF_DeleteImportGraphDefOptions(graph_opts);
TF_DeleteGraph(graph);
TF_DeleteStatus(status);
return 0;
}
TF_Buffer* read_file(const char* file) {
FILE *f = fopen(file, "rb");
fseek(f, 0, SEEK_END);
long fsize = ftell(f);
fseek(f, 0, SEEK_SET); //same as rewind(f);
void* data = malloc(fsize);
fread(data, fsize, 1, f);
fclose(f);
TF_Buffer* buf = TF_NewBuffer();
buf->data = data;
buf->length = fsize;
buf->data_deallocator = free_buffer;
return buf;
}
Note: in my earlier attempt, I used '3' and '9' as the ninputs and noutputs arguments for TF_SessionRun(), thinking that these related to the length of my input and output tensors (I'm classifying 3-dimensional features into one of 9 classes). In fact, these are simply the number of input/output tensors, as the dimensionality of the tensors is handled earlier when they're instantiated. It's easy to just use the .size() member function here (when using std::vectors to hold the TF_Outputs).
Hopefully this makes sense and helps to clarify the process for anyone who finds themselves in a similar position in future!
You can execute your code with gdb with this syntax:
gdb executable_name
This way your process will run inside gdb, so you can get the backtrace after it crashes. After the crash you will still have a console inside gdb, where you can use the command bt to see the backtrace. Hopefully that gives you enough information to debug the issue. If not, you could also add the backtrace to your original post so people can see it.
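For example, assuming the compiled binary is called my_program (a placeholder name):

gdb ./my_program
(gdb) run
... the program runs until it receives SIGSEGV ...
(gdb) bt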
Might be a good idea to read up on break points in gdb.

OpenCV k-means call assertion failed

I have read the C++ sample from the samples folder of the OpenCV source distribution, and, if you omit the random picture generation, the kmeans call looks pretty simple: the author doesn't even allocate the centers/labels arrays (you can find it here). However, I can't do the same in C. If I don't allocate labels, I get an assertion error:
OpenCV Error: Assertion failed (labels.isContinuous() && labels.type() == CV_32S &&
(labels.cols == 1 || labels.rows == 1) && labels.cols + labels.rows - 1 == data.rows)
in cvKMeans2, file /tmp/opencv-xiht/opencv-2.4.9/modules/core/src/matrix.cpp, line 3094
OK, I tried to create an empty labels matrix, but the assertion message doesn't change at all.
IplImage* image = cvLoadImage("test.jpg", -1);
IplImage* normal = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, image->nChannels);
cvConvertScale(image, normal, 1/255.0, 0);
CvMat* points = cvCreateMat(image->width, image->height, CV_32F);
points->data.fl = normal->imageData;
CvMat* labels = cvCreateMat(1, points->cols, CV_32S);
CvMat* centers = NULL;
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
// KMEANS_PP_CENTERS is undefined
int KMEANS_PP_CENTERS = 2;
cvKMeans2(points, 4, labels, criteria, 3, NULL, KMEANS_PP_CENTERS, centers, 0);
The thing that drives me nuts:
CvMat* labels = cvCreateMat(1, points->cols, CV_32S);
int good = labels->type == CV_32S; // FALSE here
It's obviously one issue (not sure if the only one) that causes the assertion to fail. How is this supposed to work? I can't use the C++ API since the whole application is in plain C.
The assertion tells you:
The type must be CV_32S, which seems to be the case in your code; maybe your if-statement is false because the type is changed to CV_32SC1 automatically? No idea...
You can place each point either in a row or in a column, so rows/cols is set to 1 and the other dimension must equal data.rows, which indicates that data holds the points you want to cluster with one point per row, giving #points rows. So your error seems to be CvMat* labels = cvCreateMat(1, points->cols, CV_32S); which should be CvMat* labels = cvCreateMat(1, points->rows, CV_32S); instead, to make the assertion go away, but your use of points seems to be conceptually wrong.
You probably have to hold the points you want to cluster in a CvMat with n rows and 2 cols of type CV_32FC1, or 1 col of type CV_32FC2 (maybe both versions work, maybe only one, or maybe I'm wrong there altogether).
Edit: I've written a short code snippet that works for me:
// here create the data array where your input points will be held:
CvMat* points = cvCreateMat( numberOfPoints , 2 /* 2D points */ , CV_32F);

// this is a float array holding the point data:
float* pointsDataPtr = points->data.fl;

// fill the mat:
for(unsigned int r=0; r<samples.size(); ++r)
{
    pointsDataPtr[2*r] = samples.at(r).x;     // this is the x coordinate of your r-th point
    pointsDataPtr[2*r+1] = samples.at(r).y;   // this is the y coordinate of your r-th point
}

// this is the data array for the labels, which will be the output of the method.
CvMat* labels = cvCreateMat(1, points->rows, CV_32S);

// this is the termination criteria, which I neither checked nor modified, just used your version here.
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);

// call the method for 2 clusters
cvKMeans2(points, 2, labels, criteria);

// now labels holds numberOfPoints labels which have either value 0 or 1, since we searched for 2 clusters
int* labelData = labels->data.i; // array of the labels
for(unsigned int r=0; r<samples.size(); ++r)
{
    int labelOfPointR = labelData[r]; // this is the value of the label of point number r

    // here I use the C++ API to draw the points; do whatever else you want to do with the
    // label information (in the C API). I choose a different color for each label.
    cv::Scalar outputColor;
    switch(labelOfPointR)
    {
        case 0: outputColor = cv::Scalar(0,255,0); break;
        case 1: outputColor = cv::Scalar(0,0,255); break;
        default: outputColor = cv::Scalar(255,0,255); break; // this should never happen for 2 clusters...
    }
    cv::circle(outputMat, samples.at(r), 2, outputColor);
}
This gives me the expected clustering result for some generated point data.
Maybe you need the centers too; the C API gives you the option to return them, but I didn't check how that works.

CoreGraphics: Encode RGBA data to PNG

I am trying to use the C interface of CoreGraphics & CoreFoundation to save a buffer of 32-bit RGBA data (as a void*) to a PNG file. When I try to finalize the CGImageDestinationRef, the following error message is printed to the console:
libpng error: No IDATs written into file
As far as I can tell, the CGImageRef I'm adding to the CGImageDestinationRef is valid.
Relevant code:
void saveImage(const char* szImage, void* data, size_t dataSize, size_t width, size_t height)
{
    CFStringRef name = CFStringCreateWithCString(NULL, szImage, kCFStringEncodingASCII);
    CFURLRef texture_url = CFURLCreateWithFileSystemPath(
        NULL,
        name,
        kCFURLPOSIXPathStyle,
        false);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, dataSize, NULL);
    CGImageRef image = CGImageCreate(width, height, 8, 32, 32 * width, colorSpace,
                                     kCGImageAlphaLast | kCGBitmapByteOrderDefault, dataProvider,
                                     NULL, FALSE, kCGRenderingIntentDefault);

    // From Image I/O Programming Guide, "Working with Image Destinations"
    float compression = 1.0; // Lossless compression if available.
    int orientation = 4; // Origin is at bottom, left.
    CFStringRef myKeys[3];
    CFTypeRef myValues[3];
    CFDictionaryRef myOptions = NULL;
    myKeys[0] = kCGImagePropertyOrientation;
    myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
    myKeys[1] = kCGImagePropertyHasAlpha;
    myValues[1] = kCFBooleanTrue;
    myKeys[2] = kCGImageDestinationLossyCompressionQuality;
    myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
    myOptions = CFDictionaryCreate( NULL, (const void **)myKeys, (const void **)myValues, 3,
                                    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CFStringRef type = CFStringCreateWithCString(NULL, "public.png", kCFStringEncodingASCII);
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(texture_url, type, 1, myOptions);
    CGImageDestinationAddImage(dest, image, NULL);
    if (!CGImageDestinationFinalize(dest))
    {
        // ERROR!
    }

    CFRelease(image);
    CFRelease(colorSpace);
    CFRelease(dataProvider);
    CFRelease(dest);
    CFRelease(texture_url);
}
This post is similar, except I'm not using the Objective C interface: Saving a 32 bit RGBA buffer into a .png file (Cocoa OSX)
Answering my own questions:
In addition to the issues pointed out by NSGod, the IDAT error was caused by an invalid parameter to CGImageCreate(): parameter 5 is bytesPerRow, not bitsPerRow. So 32 * width was incorrect; 4 * width is correct.
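For reference, a sketch of the corrected call for a tightly packed 8-bit-per-channel RGBA buffer, using the variable names from the question:

CGImageRef image = CGImageCreate(width, height,
                                 8,           // bitsPerComponent
                                 32,          // bitsPerPixel
                                 4 * width,   // bytesPerRow (bytes, not bits)
                                 colorSpace,
                                 kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                 dataProvider, NULL, FALSE,
                                 kCGRenderingIntentDefault);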
Also, despite what this page of the official documentation lists, UTCoreTypes.h is located in CoreServices.framework on Mac OS X, not MobileCoreServices.framework.
There are numerous issues with your code.
Here it is rewritten how I would do it:
void saveImage(const char* szImage, void* data, size_t dataSize, size_t width, size_t height)
{
    CFStringRef name = CFStringCreateWithCString(NULL, szImage, kCFStringEncodingUTF8);
    CFURLRef texture_url = CFURLCreateWithFileSystemPath(
        NULL,
        name,
        kCFURLPOSIXPathStyle,
        false);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data,
        dataSize, NULL);
    CGImageRef image = CGImageCreate(width, height, 8, 32, 32 * width, colorSpace,
        kCGImageAlphaLast | kCGBitmapByteOrderDefault,
        dataProvider, NULL, FALSE, kCGRenderingIntentDefault);

    // From Image I/O Programming Guide, "Working with Image Destinations"
    float compression = 1.0; // Lossless compression if available.
    int orientation = 4; // Origin is at bottom, left.
    CFStringRef myKeys[3];
    CFTypeRef myValues[3];
    CFDictionaryRef myOptions = NULL;
    myKeys[0] = kCGImagePropertyOrientation;
    myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
    myKeys[1] = kCGImagePropertyHasAlpha;
    myValues[1] = kCFBooleanTrue;
    myKeys[2] = kCGImageDestinationLossyCompressionQuality;
    myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
        (const void **)myValues, 3, &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);

    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(texture_url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(dest, image, NULL);
    CGImageDestinationSetProperties(dest, myOptions);
    if (!CGImageDestinationFinalize(dest))
    {
        // ERROR!
    }
}
First, never use ASCII when dealing with file system paths, use UTF8. Second, you were constructing a dictionary to be used to set the properties of the image, but you were using it with the wrong function. The documentation for CGImageDestinationCreateWithURL() says the following:
CGImageDestinationCreateWithURL
Creates an image destination that writes to a location specified by a
URL.
CGImageDestinationRef CGImageDestinationCreateWithURL (
    CFURLRef url,
    CFStringRef type,
    size_t count,
    CFDictionaryRef options
);
Parameters
options - Reserved for future use. Pass NULL.
You were trying to pass a dictionary of properties when you were supposed to pass NULL. (Also, you can simply use the kUTTypePNG Uniform Type Identifier string constant instead of re-creating it). First call CGImageDestinationCreateWithURL(), then call CGImageDestinationAddImage() to add the image, then call CGImageDestinationSetProperties() and pass in the dictionary of properties you created.
[UPDATE]: If after these changes you're still having "libpng error: No IDATs written into file" issues, try the following. First, make sure that dataProvider is non-NULL; in other words, make sure the CGDataProviderCreateWithData() function succeeded. Second, if dataProvider is valid, perhaps try changing the options from kCGImageAlphaLast | kCGBitmapByteOrderDefault to simply kCGImageAlphaPremultipliedLast and see if it succeeds.
