A std::tr1::shared_ptr for Objective-C++ on iPhone?

I am mostly a C++ developer; recently I have been writing iPhone applications.
The memory management on iPhone is OK with me: due to resource limitations, it's encouraged to use reference counting rather than deep copies.
One annoying thing is that I have to manage the reference counts myself: alloc means counter = 1, retain means counter++, and release means counter--.
I would like to write a shared_ptr-like class for Cocoa Touch so that I rarely have to manipulate the reference counts manually.
I wonder if there's any existing code for that, and I'd like to hear some advice; today is the fifth day since I started learning Objective-C.
Thanks.

As long as you learn the memory management rules first, there is no real problem with shared_ptr - it can help you in C++ contexts but doesn't let the ownership questions magically disappear.
shared_ptr supports a custom deallocator so the following:
@interface A : NSObject
- (void)f;
@end

@implementation A
- (void)dealloc { NSLog(@"bye"); [super dealloc]; }
- (void)f { NSLog(@"moo"); }
@end

void my_dealloc(id p) {
    [p release];
}

// ...
{
    shared_ptr<A> p([[A alloc] init], my_dealloc);
    [p.get() f];
}
... outputs:
moo
bye
... as expected.
If you want you can hide the deallocator from the user using a helper function, e.g.:
template<class T> shared_ptr<T> make_objc_ptr(T* t) {
    return shared_ptr<T>(t, my_dealloc);
}
shared_ptr<A> p = make_objc_ptr<A>([[A alloc] init]);
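One caveat, assuming the usual pre-ARC ownership rules: if the object comes from an autoreleasing convenience constructor rather than alloc/init, you don't own a reference yet, so retain it before handing it over, otherwise the release in the deallocator is unbalanced. A minimal sketch:
// Sketch only: take ownership of an autoreleased object before wrapping it,
// so the [p release] in my_dealloc balances a reference we actually hold.
shared_ptr<NSMutableArray> q = make_objc_ptr<NSMutableArray>([[NSMutableArray array] retain]);
[q.get() addObject:@"hello"];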

You forgot case 4
[4] You need to pass a pointer to an object out of a method as the return value.
This is where you need -autorelease.
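For example, a minimal sketch under the pre-ARC rules (the method and variable names here are made up for illustration):
// Hypothetical method: return an object the caller does not own.
- (NSArray *)makeNames {
    NSMutableArray *names = [[NSMutableArray alloc] init]; // counter = 1
    [names addObject:@"Alice"];
    return [names autorelease]; // counter drops when the current pool drains
}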
I suggest you read the memory management rules and write some real code before you attempt this little project so that you can get a feel of how memory management is supposed to work.

Automatic reference counting, coming in iOS 5, will effectively make any pointer to an Objective-C object act like a smart pointer. Retain/release calls will be synthesized by the compiler on assignment and deallocation, unless you explicitly declare a reference to be weak, in which case it will be automatically zeroed out when the object is deallocated.
My advice is to wait a couple of months for that. You might be able to put together something similar in the meantime, but I'd recommend against it. For one thing, it'll be ugly. Example:
smart_ptr<id> array = make_smart_ptr( [NSMutableArray array] );
NSUInteger count = [array count]; // won't work.
count = [array.get() count]; // works, but yuck.
[array.get() setArray: anotherArray.get()]; // even more yuck.
Also, if your headers are full of C++ classes, you'll have to compile your entire project as Objective-C++, which may cause you problems: Objective-C++ isn't 100% compatible with Objective-C code, and not all third-party frameworks will work properly with it. And forget about sharing your code with anyone else.
It might be an interesting exercise to make something like this work, but you won't want to actually use it. And watch out for the temptation to recreate your favourite bits of C++ in Objective-C. The languages are very different, and you could spend a lot of time doing that, which is time not spent learning all the great stuff you can do in Objective-C that you can't do in C++.

Resource management in Cocoa can be tricky: some API calls automatically retain a reference and some don't, some return an autoreleased object, some a retained one. By hiding this behind a shared_ptr class, you'll be more likely to make mistakes. My advice is to first take the "normal" Cocoa route until you're fairly experienced.

Have you looked into [object autorelease]? Perhaps that would make things a bit easier.

Related

Initializing private data in custom Gtk+ widget which depends on parent's members (C)

I'm working on a pet project solely for the purpose of learning a few APIs. It's not intended to have practical value, but rather to be a relatively simple exercise to get me comfortable with libpcap, gtk+, and cairo before I use them for anything serious. This is a graphical program, implemented in C and using Gtk+ 2.x. It's eventually going to read frames with pcap (currently I just have a hardcoded test frame), then use cairo to generate pretty pictures using color values generated from the raw packet (at this stage, I'm just using cairo_show_text to print a text representation of the frame or packet). The pictures will then be drawn to a custom widget inheriting from GtkDrawingArea.
My first step, of course, is to get a decent grasp of the Gtk+ runtime environment so I can implement my widget. I've already managed to render and draw text using cairo to my custom widget. Now I'm at the point where I think the widget really needs private storage for things like the cairo_t context pointer and a GdkRegion pointer (I had not planned to use Gdk directly, but my research indicates that it may be necessary in order to call gdk_window_invalidate_region() to force my DrawingArea to refresh once I've drawn a frame, not to mention gdk_cairo_create()). I've set up private storage as a global variable (the horror! Apparently this is conventional for Gtk+. I'm still not sure how this will even work if I have multiple instances of my widget, so maybe I'm not doing this part right. Or maybe the preprocessor macros and runtime environment are doing some magic to give each instance its own copy of this struct?):
/* private data */
typedef struct _CandyDrawPanePrivate CandyDrawPanePrivate;

struct _CandyDrawPanePrivate {
    cairo_t *cr;
    GdkRegion *region;
};

#define CANDY_DRAW_PANE_GET_PRIVATE(obj) \
    (G_TYPE_INSTANCE_GET_PRIVATE((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPanePrivate))
Here's my question: Initializing the pointers in my private data struct depends on members inherited from the parent, GtkWidget:
/* instance initializer */
static void candy_draw_pane_init(CandyDrawPane *pane) {
    GdkWindow *win = NULL;
    /* win = gtk_widget_get_window((GtkWidget *)pane); */
    win = ((GtkWidget *)pane)->window;
    if (!win)
        return;

    /* TODO: I should probably also check this return value */
    CandyDrawPanePrivate *priv = CANDY_DRAW_PANE_GET_PRIVATE((CandyDrawPane *)pane);
    priv->cr = gdk_cairo_create(win);
    priv->region = gdk_drawable_get_clip_region(win);

    candy_draw_pane_update(pane);
    g_timeout_add(1000, candy_draw_pane_update, pane);
}
When I replaced my old code, which called gdk_cairo_create() and gdk_drawable_get_clip_region() during my event handlers, with this code, which calls them during candy_draw_pane_init(), the application would no longer draw. Stepping through with a debugger, I can see that pane->window and pane->parent are both NULL pointers while we are within candy_draw_pane_init(). The pointers are valid later, in the Gtk event processing loop. This leads me to believe that the inherited members have not yet been initialized when my derived class' "_init()" method is called. I'm sure this is just the nature of the Gtk+ runtime environment.
So how is this sort of thing typically handled? I could add logic to my event handlers to check priv->cr and priv->region for NULL, and call gdk_cairo_create() and gdk_drawable_get_clip_region() if they are still NULL. Or I could add a "post-init" method to my CandyDrawPane widget and call it explicitly after I call candy_draw_pane_new(). I'm sure lots of other people have encountered this sort of scenario, so is there a clean and conventional way to handle it?
This is my first real foray into object-oriented C, so please excuse me if I'm using any terminology incorrectly. I think one source of my confusion is that Gtk has separate concepts of instance and class initialization. C++ may do something similar "under the hood," but if so, it isn't as obvious to the coder.
I have a feeling that if this were C++, most of the code that's going into candy_draw_pane_init() would be in the class constructor, and any secondary initialization that depended on the constructor having completed would go into an "Init()" method (which of course is not a feature of the language, but just a commonly used convention). Is there an analogous convention for Gtk+? Or perhaps someone can give a good overview of the flow of control when these widgets are instantiated. I have not been very impressed with the quality of the official Gnome documentation. Much of it is either too high-level, contains errors and typos in code, or has broken links or missing examples. And of course the heavy use of macros makes it a little harder to follow even my own code (in this respect it reminds me of Win32 GUI development). In short, I'm sure I can struggle through this on my own and make it work, but I'd like to hear from someone experienced with Gtk+ and C what the "right" way to do this is.
For completeness, here is the header where I set up my custom widget:
#ifndef __GTKCAIRO_H__
#define __GTKCAIRO_H__ 1

#include <gtk/gtk.h>

/* Following tutorial; see gtkcairo.c */
/* Not sure about naming convention; may need revisiting */
G_BEGIN_DECLS

#define CANDY_DRAW_PANE_TYPE (candy_draw_pane_get_type())
#define CANDY_DRAW_PANE(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPane))
#define CANDY_DRAW_PANE_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), CANDY_DRAW_PANE_TYPE, CandyDrawPaneClass))
#define IS_CANDY_DRAW_PANE(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), CANDY_DRAW_PANE_TYPE))
#define IS_CANDY_DRAW_PANE_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), CANDY_DRAW_PANE_TYPE))

// official gtk tutorial, which seems to be of higher quality, does not use this.
// #define CANDY_DRAW_PANE_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPaneClass))

typedef struct {
    GtkDrawingArea parent;
    /* private */
} CandyDrawPane;

typedef struct {
    GtkDrawingAreaClass parent_class;
} CandyDrawPaneClass;

/* method prototypes */
GtkWidget* candy_draw_pane_new(void);
GType candy_draw_pane_get_type(void);
void candy_draw_pane_clear(CandyDrawPane *cdp);

G_END_DECLS

#endif
Any insight is much appreciated. I do realize I could use a code-generating IDE and crank something out more quickly, and probably dodge having to deal with some of this stuff, but the whole point of this exercise is to get a good grasp of the Gtk runtime, so I'd prefer to write the boilerplate by hand.
This article, A Gentle Introduction to GObject Construction, may help you. Here are some tips that I thought of while looking at your code and your questions:
If your priv->cr and priv->region pointers have to change whenever the widget's GDK window changes, then you could also move that code into a signal handler for the notify::window signal. notify is a signal that fires whenever an object's property is changed, and you can narrow down the signal emission to listen to a specific property by appending it to the name of the signal like that.
You don't need to check the return value from the GET_PRIVATE macro. Looking at the source code for g_type_instance_get_private(), it can return NULL in the case of an error, but it's really unlikely, and will print warnings to the terminal. My feeling is that if GET_PRIVATE returns NULL then something has gone really wrong and you won't be able to recover and continue executing the program anyway.
You're not setting up private storage as a global variable. Where are you declaring this global variable? I only see a struct and typedef declaration at the global level. What you are most likely doing, and what is the usual practice, is calling g_type_class_add_private() in the class_init function. This reserves space within each object for your private struct. Then when you need to use it, g_type_instance_get_private() gives you a pointer to this space.
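For instance, a minimal sketch of that usual pattern, reusing the CandyDrawPane names from the question:
/* class initializer: reserve room for the private struct in every instance */
static void candy_draw_pane_class_init(CandyDrawPaneClass *klass) {
    g_type_class_add_private(klass, sizeof(CandyDrawPanePrivate));
}
/* later, anywhere you have an instance: */
/* CandyDrawPanePrivate *priv = CANDY_DRAW_PANE_GET_PRIVATE(pane); */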
The init method is the equivalent of a constructor in C++. The class_init method has no equivalent, because all the work done there is done behind the scenes in C++. For example, in a class_init function, you might specify which functions override the parent class's virtual functions. In C++, you simply do this by defining a method in the class with the same name as the virtual method you want to override.
As far as I can tell, the only problem with your code is the fact that the GdkWindow of a GtkWidget (widget->window) is only set when the widget has been realized, which normally happens when gtk_widget_show is called. You can tell it to realize earlier by calling gtk_widget_realize, but the documentation recommends connecting to the draw or realize signal instead.
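A minimal sketch of the realize-signal approach, again reusing the names from the question (treat it as a starting point rather than a drop-in fix):
/* runs once the widget has a GdkWindow, i.e. after it has been realized */
static void candy_draw_pane_realize_cb(GtkWidget *widget, gpointer data) {
    CandyDrawPanePrivate *priv = CANDY_DRAW_PANE_GET_PRIVATE(widget);
    priv->cr = gdk_cairo_create(widget->window);
    priv->region = gdk_drawable_get_clip_region(widget->window);
}

static void candy_draw_pane_init(CandyDrawPane *pane) {
    g_signal_connect(pane, "realize",
                     G_CALLBACK(candy_draw_pane_realize_cb), NULL);
}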

Why is it a bad idea to have an Object[] array?

I was explaining to a friend a few days ago the concepts of inheritance and containers.
He has very little programming knowledge, so it was really just a friendly chat.
During the conversation he came to me with a question that I just couldn't answer.
"Why can't you just have an array of the top-level class, and add anything to it?"
I know this is a bad idea, having been told so before by someone far smarter, but for the life of me I couldn't remember why.
I mean, we do it all the time with inheritance.
Say we have a class Animal which is the parent of Cat and Dog. If we need a container of both of these, we make the array of type Animal.
So let's say we didn't have that inheritance link; couldn't we just use the base Object class and have everything in the one container?
No specific programming language.
Syntactically, there is no problem with this. By declaring an array of a specific type, you are giving implicit information about the contents of that array. You could well declare a container of Object instances, but it means you lose all the type information of the original class at compile time.
It also means that each time you get an object out of the array at runtime, the only fields and methods you know exist are the fields/methods of Object (which arguably is a compile-time problem). To use any of the fields and methods of a more specific subclass, you'd have to cast.
Alternatively, to find out the specific class at runtime you'd have to use features like reflection which are overkill for the majority of cases.
When you take elements out of the container you want to have some guarantees as to what can be done with them. If all elements of the container are returned as instances of Animal (remember here that instances of Dog are also instances of Animal) then you know that they can do all the things that Animals can do (which is more things than what all Objects can do).
Maybe, we do it in programming for the same reason as in Biology? Reptiles and Whales are animals, but they are quite different.
It depends on the situation, but without context, it's definitely okay in most (if not all) object-oriented languages to have an array of a base type (that is, as long as they follow all the substitution principles) containing various instances of different derived types.
Object arrays exist in certain cases in most languages. The problem is that whenever you want to use them, you need to remember what type they were, and keep casting them back.
It also makes the code very horrible to follow and even more horrible to extend, not to mention error prone.
Plant myplant = new Plant();
listOfAnimals.Add(myplant);
would work if the list is declared as a list of object, but you'd get a compile-time error if it were a list of Animal.

How to handle an array of pointers in Objective-C

I figured out the answer to this question, but I couldn't find the solution on here, so posting it for posterity.
So, in Objective-C, how do you create an object out of a pointer in order to store it in Objective-C collections (NSArray, NSDictionary, NSSet, etc.) without reverting to regular C?
NSValue *pointerObject = [NSValue valueWithPointer:aPointer];
This will wrap the pointer in an NSValue. To get it back out later, use NSValue's -pointerValue instance method.
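For example, a minimal round trip (aPointer stands in for whatever raw pointer you are storing):
NSValue *pointerObject = [NSValue valueWithPointer:aPointer];
NSArray *items = [NSArray arrayWithObject:pointerObject]; // any Cocoa collection works
void *raw = [[items objectAtIndex:0] pointerValue];       // same address you put in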
An alternative solution is to define a class that has methods that access/manipulate the contents of the pointer, then add instances of that to the array.
Don't bother subclassing NSValue as it really adds nothing to the solution.
Something like:
@interface FooPtr : NSObject
{
    void *foo;
}
+ fooPtrWithFoo:(void *)aFoo;
// ... methods here ...
@end
I specifically chose an opaque (void *) as that tells the client "don't touch my innards directly". In the implementation, do something like #define FOOPTR(foo) ((Foo *) foo). Then you can use FOOPTR(foo)->bar as needed in your various methods.
Doing it this way also makes it trivial to add Objective-C specific logic on top of the underlying datatype. Sorting is just a matter of implementing the right method. Hashing/Dictionary entries can now be hashed on foo's contents, etc...
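Usage then looks like any other Cocoa object (somePointer and someMutableArray here are just placeholders for illustration):
[someMutableArray addObject:[FooPtr fooPtrWithFoo:somePointer]];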

Most difficult programming explanation

Recently I tried to explain some poorly designed code to my project manager. All of the manager classes are singletons ("and that's why I can't easily change this") and the code uses event dispatching everywhere that a function call would have sufficed ("and that's why it's so hard to debug"). Sadly it just came out as a fumbling mess of English.
What's the most difficult thing you've had to convey to a non-technical person as a programmer? Did you find any analogies or ways of explaining that made it clearer?
Thread Synchronization and Dead-Locking.
Spending time on design, and spending time on refactoring.
Refactoring produces no client-visible work at all, which makes it the hardest thing in the project to justify working on.
As a second "not client-visible" problem, unit testing.
I was asked how the internet worked - I responded with "SYN, ACK, ACK". Keep forgetting it's SYN, SYN-ACK, ACK..
My most difficult question began innocently enough: my girlfriend asked how text is rendered in Firefox. I answered simply with something along the lines of "rendering engine, Gecko, HTML parser, blah blah blah."
Then it went downhill. "Well how does Gecko know what to display then?"
It spiraled from there quite literally down to the graphics drivers, operating system, compilers, hardware architectures, and the raw 1s and 0s. I not only realized there were significant gaps in my own knowledge of the layering hierarchy, but also how, in the end, I had left her (and me!) more confused than when I began.
I should've initially answered "turtles all the way down" and stuck with that. :P
I had a fun case of trying to explain why a program wasn't behaving as expected when some records in a database had empty strings and some were NULL. I think their head just about exploded when I told them an empty string is just a string with 0 bytes in it, while NULL means an unknown value, so you can't actually compare it to anything.
Afterward I had one nasty headache.
1.) SQL: Thinking in sets, rather than procedurally (it's hard enough for us programmers to grasp!).
2.) ...and here's a great example of demystifying technical concepts:
How I explained REST to my wife
A lot of statements starting with "It's because in Oracle, ..." come to my mind.
The biggest hurdles are around "technological debt", especially about how the architecture was correct for this version but needs to be changed for next version. This is similar to the problem of explaining "prototype versus production" and "version 1.0 versus version 2.0".
Worst mistake I ever made was doing a UI mockup in NeXTSTEP's Interface Builder. It looked exactly like the end product would look and had some behaviour. Trying to explain that there were 6 months of work remaining after that was very difficult.
How recursion works...
Why code like this is bad:
private void button1_Click(object sender, EventArgs e)
{
    System.Threading.ThreadStart start =
        new System.Threading.ThreadStart(SomeFunction);
    System.Threading.Thread thread = new System.Threading.Thread(start);
    _SomeFunctionFinished = false;
    thread.Start();
    while (!_SomeFunctionFinished)
    {
        System.Threading.Thread.Sleep(1000);
    }
    // do something else that can only be done after SomeFunction() is finished
}

private bool _SomeFunctionFinished;

private void SomeFunction()
{
    // do some elaborate $##%#
    _SomeFunctionFinished = true;
}
Update: what this code should be:
private void button1_Click(object sender, EventArgs e)
{
    SomeFunction();
    // do something else that can only be done after SomeFunction() is finished
}

private void SomeFunction()
{
    // do some elaborate $##%#
}
The importance of unit tests.
"Adding a new programmer a month to this late task will make it ship later. Never mind, read this book." (The Mythical Man-Month.) Managers still don't quite get it.
The concept of recursion - some people find it really hard to grasp.
I sometimes have a really hard time explaining the concept of covariance/contravariance and the problems related to them to fellow programmers.
Convincing a friend that the Facebook application I developed really doesn't store her personal data (e.g. her name) even though it still displays it.
Why it'll take another four weeks to put this app into production. After all, it only took a week to do the rapid prototype. It "works" (or at least looks like it does) so I should be pretty much finished, shouldn't I?
Explanations that involve security, code quality (maintainability), normalized DB schemas, testing, etc. usually come off as a list of abstractions that don't have any visible effect on the app, so it's hard to explain what they really contribute to the project and why they're necessary. Sometimes analogies can only take you so far.
C pointers
*i
&i
Avoiding Dead-Locking in a multi-threaded environment.
I cleared up the confusion by explaining it visually on a whiteboard, drawing out two parallel lines and showing what happens when they reach the same point at the same time.
Also role-playing two threads with the person I was explaining it to, and using physical objects (book, coffee mug, etc) to show what happens when we both try to use something at once.
There's really no right or wrong answer for this... it's all experience.
The hardest thing I have had to explain to a non-tech person was why he couldn't get to his website when traveling abroad but his family member that lived there (with a totally different provider) could get to it. Somehow, "Fail in Finland" wasn't good enough.
The most difficult concepts to explain to people I would label programmers as opposed to developers are some of the core paradigms of object-oriented design - most specifically abstraction, encapsulation and, the king, polymorphism, and how to use them correctly.
Expanding on that is the complexity of explaining what Inversion of Control is and why it is an absolute need and not just extra layers of code that don't do anything.
I was going to comment on Mikael's post, that some people just pick up sequential programming and unfortunately stay with that.
But that really means: two seriously hard-to-explain concepts:
Monads in Haskell (usually starting with: "That's like a function that returns a function that does what you really wanted to do, but ...")
Deferreds in Twisted/Python ("That's like... ehhh... Just use it for a year or so and you'll get it" ;) )
Trying to explain why code was executed sequentially at all. Seemingly this is not at all intuitive for some non-programmers (i.e. my girlfriend).
Why you do not need character-correct index handling in most cases when you use UTF-8 strings.
It's hard to explain why most software has bugs. Many non-technical people have no idea how complex software is, and how easy it is to overlook unexpected conditions. They think we are just too lazy to fix stuff that we know is broken.
There are 10 different types of people in the world.
The people who understand binary and the people who don't....
To put it plainly, why development is the most difficult concept ever exposed to mankind. Not related to any programming language, but in general. And no, I am not trying to give myself or you an ego boost; the only real limitation in this field is your mind.
Why? We don't work with constants and there are no boundaries; the only reason an AI that thinks like a human being doesn't exist yet is our own limitations. All other fields need to adhere to some sort of law; development doesn't care about the laws of physics or any law for that matter, hence the term development... evolution.

Does using lists of structs make sense in cocoa?

This question spawned out of this one. Working with lists of structs in Cocoa is not simple: either use NSArray and encode/decode, or use a C-style array and lose the conveniences of NSArray. Structs are supposed to be simple, but when a list is needed, one tends to build a class instead.
When does using lists of structs make sense in Cocoa?
I know there are already many questions regarding structs vs. classes, and I've read users argue that it's the same answer for every language, but at least Cocoa should have its own specific answers to this, if only because of KVC or bindings (as Peter suggested on the first question).
Cocoa has a few common types that are structs, not objects: NSPoint, NSRect, NSRange (and their CG counterparts).
When in doubt, follow Cocoa's lead. If you find yourself dealing with a large number of small, mostly-data objects, you might want to make them structs instead for efficiency.
Using NSArray/NSMutableArray as the top-level container, and wrapping the structs in an NSValue will probably make your life a lot easier. I would only go to a straight C-type array if you find NSArray to be a performance bottleneck, or possibly if the array is essentially read-only.
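For example, a hedged sketch of the NSValue route with a made-up struct (the names here are purely illustrative):
typedef struct { float x, y, z; } Vertex;

NSMutableArray *vertices = [NSMutableArray array];
Vertex v = { 1.0f, 2.0f, 3.0f };
[vertices addObject:[NSValue valueWithBytes:&v objCType:@encode(Vertex)]];

// copy the struct back out when you need it
Vertex first;
[[vertices objectAtIndex:0] getValue:&first];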
It is convenient and useful at times to use structs, especially when you have to drop down to C, such as when working with an existing library or doing system level stuff. Sometimes you just want a compact data structure without the overhead of a class. If you need many instances of such structs, it can make a real impact on performance and memory footprint.
Another way to do an array of structs is to use the NSPointerArray class. It takes a bit more thought to set up, but after that it works pretty much just like an NSArray, and you don't have to bother with boxing/unboxing or wrapping in a class, so accessing the data is more convenient and it doesn't take up the extra memory of a class.
NSPointerFunctions *pf = [[NSPointerFunctions alloc] initWithOptions:NSPointerFunctionsMallocMemory |
                                                                     NSPointerFunctionsStructPersonality |
                                                                     NSPointerFunctionsCopyIn];
pf.sizeFunction = keventSizeFunction;
self.pending = [[NSPointerArray alloc] initWithPointerFunctions:pf];
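After that, adding and reading entries looks roughly like this (a sketch assuming the kqueue-style struct kevent that keventSizeFunction above suggests):
struct kevent ev;
// ... fill in ev ...
[self.pending addPointer:&ev];   // the copy-in functions make their own copy
struct kevent *stored = (struct kevent *)[self.pending pointerAtIndex:0];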
In general, the use of a struct implies the existence of a relatively simple data type that has no logic associated with it, nor should have any logic associated with it. Take an NSPoint, for instance: it is merely an (x, y) representation. Given this, there are also some issues that arise from its use. In general, this is OK for this type of data, as we usually observe a change in the point as a whole rather than in the y-coordinate of the point (fundamentally, (0,1) isn't the same as (1,1) shifted down by 1 unit). If this is undesirable behavior, it may be a better idea to use a class.
