Creating Core Foundation classes in C

Since I can't seem to find any documentation on this subject: is it possible to create your own Core Foundation "class"? ("class" as in a type that can be used with CFRetain() and CFRelease().) I want to take advantage of the polymorphic capabilities and object inspection built into Core Foundation without the overhead of Objective-C or of creating my own object hierarchy.

Be forewarned: I believe that Core Foundation doesn't have true inheritance without Objective-C loaded, and with Objective-C loaded you'll get the (minor) associated slowdowns anyway.
There probably won't be any documentation on it, but it might be possible. It certainly won't be clean. Try browsing through the CF-Lite source code (link is for Mac OS X 10.5.7) to get a feel for the framework's implementation.
Note that if the overhead of Objective-C you mention is the overhead of message invocation, there are a great many ways to optimize it (for example, caching the IMP via the +instanceMethodForSelector: class method). You're very likely to spend more time trying to worm your way into the Core Foundation framework than you would spend optimizing Objective-C code to bring it up to speed.
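For what it's worth, the CF-Lite sources show what such a type looks like: CFRuntime.h exposes _CFRuntimeRegisterClass() and _CFRuntimeCreateInstance(). The sketch below follows that header; keep in mind this is private, unsupported API, and MyThing is a made-up example type:

#include <CoreFoundation/CoreFoundation.h>
#include "CFRuntime.h"   /* private header, shipped only with the CF-Lite sources */

typedef struct {
    CFRuntimeBase _base;   /* must come first: CF's per-instance header */
    int value;             /* your own fields follow */
} MyThing;

static CFTypeID kMyThingTypeID = _kCFRuntimeNotATypeID;

static CFStringRef MyThingCopyDebugDesc(CFTypeRef cf) {
    const MyThing *t = (const MyThing *)cf;
    return CFStringCreateWithFormat(NULL, NULL, CFSTR("<MyThing %d>"), t->value);
}

static const CFRuntimeClass kMyThingClass = {
    0,                     /* version */
    "MyThing",             /* class name */
    NULL,                  /* init */
    NULL,                  /* copy */
    NULL,                  /* finalize */
    NULL,                  /* equal */
    NULL,                  /* hash */
    NULL,                  /* copyFormattingDesc */
    MyThingCopyDebugDesc   /* copyDebugDesc, used by CFCopyDescription() */
};

MyThing *MyThingCreate(int value) {
    if (kMyThingTypeID == _kCFRuntimeNotATypeID)
        kMyThingTypeID = _CFRuntimeRegisterClass(&kMyThingClass);
    MyThing *t = (MyThing *)_CFRuntimeCreateInstance(
        kCFAllocatorDefault, kMyThingTypeID,
        sizeof(MyThing) - sizeof(CFRuntimeBase), NULL);
    if (t != NULL)
        t->value = value;
    return t;
}

Instances created this way respond to CFRetain(), CFRelease(), CFGetTypeID() and CFCopyDescription() like any built-in type - but again, none of this is supported API.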

Technically, there are no Core Foundation classes. They are opaque types.

Related

What are the differences between the ways of extending Postgres?

I want to create a new data type and new operators in PostgreSQL.
I saw in the documentation that it is possible to incorporate new source files in C (for example) and create a new data type and operators; PostgreSQL is extensible in that direction. More information at: documentation
But PostgreSQL is also open source, so I could alter the source code and add a new data type directly, compiling my own version.
Given that, I want to know the differences, advantages, and disadvantages of each method of including a new data type in PostgreSQL. I'm very concerned about performance in query processing.
Thank you.
If you modify PostgreSQL, you have to maintain the whole code base, and you have to redo your patching every time you want to upgrade, even between minor versions. If you make an extension, you only have your little extension to maintain. It's also much easier to distribute a small extension if you ever want to do that.
I fully agree with Jachim's answer.
Another thing is:
Developing your own C-language extension for PostgreSQL is (rather) well documented - simple extensions can be done by compiling just one file of C code and writing the corresponding SQL function definitions (see the sketch below). Adding a custom data type is a bit more complicated, but still doable. The extension I developed was even written in C++, with just a bit of wrapper glue around PostgreSQL's plain C API - that made development much more flexible.
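To give a feel for the scale involved, here is a minimal sketch of a version-1 C-language function, following the pattern from the PostgreSQL documentation (add_one is just a placeholder name):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;                 /* required once per loadable module */

PG_FUNCTION_INFO_V1(add_one);    /* declares the version-1 calling convention */

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);   /* fetch the first argument */

    PG_RETURN_INT32(arg + 1);
}

After compiling this into a shared library you register it with CREATE FUNCTION add_one(integer) RETURNS integer AS 'MODULE_PATHNAME', 'add_one' LANGUAGE C STRICT; and from then on it behaves like any built-in function.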
Altering the PostgreSQL core, however, is more complicated in terms of how you start and what you do. And in the end, you achieve the same thing.
To sum it up, C-language functions give you all the advantages:
High performance by utilising PostgreSQL's internal data types
Simple programming interface
Just a small bit of code with a documented and presumably very stable function interface
I cannot see any advantages to altering PostgreSQL's core, but many disadvantages:
Long compilation times
Maintaining your own code branch and regularly reapplying your patch to the current release
Higher risk of bugs.
If you need examples of a lot of different ways to use the C-language interface, have a look at the PostGIS source code - they use nearly all the function types and have a lot of fancy tricks in their code.
There are no differences between built-in operators and data types and custom operators and data types, so they have the same performance. External and internal implementations respect the same rules and patterns. So there is no reason to hack Postgres for this.
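As an illustration of that parity: once the support function exists, a custom operator is declared with plain DDL and the planner treats it exactly like a built-in one. This is the complex-number example from the PostgreSQL manual (it assumes the complex type and its C support function have already been created):

CREATE FUNCTION complex_add(complex, complex)
    RETURNS complex
    AS 'MODULE_PATHNAME', 'complex_add'
    LANGUAGE C IMMUTABLE STRICT;

CREATE OPERATOR + (
    leftarg = complex,
    rightarg = complex,
    procedure = complex_add,
    commutator = +
);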
We have patched PostgreSQL at GoodData - and we have our own extensions too. Wherever it is possible and practical we use custom extensions, and where it is not possible we use our own hacks - backports from 9.2 and 9.3, some enhancements to pg_dump and psql, statistics - but we have active PostgreSQL hackers in the company, which is not usual. For users without PostgreSQL-hacking experience, creating extensions is a safe solution with good performance.

What are the advantages of content repositories (not talking about CMSs)

Given that a lot of people use content repositories, there must be a good reason. I'm building a new web application that will need to store content. Can someone help me understand this?
What are the advantages of using a content repository like Apache Jackrabbit as opposed to writing your own code/API to store images or text pages? Writing your own requires time, etc., but so does implementing and learning a new framework like the content repository API. A benefit of rolling your own, it seems to me, is that you know your code and have immediate expertise if you need to enhance or fix it. With another framework you need to learn its foibles, and it is always easier to modify code you know than code you don't - i.e., you don't know the underlying framework code as well as your own.
As I said, a lot of people use them. There must be a reason. I can't see it as being just another "everyone is using them, so we should too." At least I hope it isn't that. :)
A JCR repository allows you to store all your content (from structured database-type data to large multimedia files) in a single place and with a single API, which is extremely convenient and makes your code simpler, avoiding the impedance mismatch between files and data that you usually have in content-based systems.
JCR also provides a lot of infrastructure functionality that you won't have to build or assemble yourself: search (including full-text), observation (callbacks when something changes), versioning, data types including multi-value, ordered nodes, etc...
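As a rough illustration of the single-API point, storing and reading a piece of content through the standard javax.jcr API looks something like this (a sketch assuming Jackrabbit's TransientRepository for a self-contained demo; the node and property names are made up):

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

public class JcrExample {
    public static void main(String[] args) throws Exception {
        Repository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // Content is a tree of nodes with typed properties.
            Node root = session.getRootNode();
            Node page = root.addNode("articles").addNode("hello-world");
            page.setProperty("title", "Hello, world");
            page.setProperty("tags", new String[] { "demo", "jcr" }); // multi-value property
            session.save();

            // Reading it back goes through the very same API.
            Node fetched = session.getNode("/articles/hello-world");
            System.out.println(fetched.getProperty("title").getString());
        } finally {
            session.logout();
        }
    }
}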
If you allow a shameless plug, my "JCR - best of both worlds" article at http://java.dzone.com/articles/java-content-repository-best describes this in more detail and also provides a reading list for the JCR spec that should allow you to get a good overview without reading the whole thing.
The article uses Apache Sling for its examples, which combined with a JCR repository provides a very nice (IMO, but as a Sling committer I'm biased ;-) platform for content-based applications.
My most recent projects have involved both choices: a custom-built data store (MySQL and image files) with a multi-level caching mechanism, and a JCR-based commercial repository.
A few thoughts:
In the short run, a DIY solution offers reduced complexity: you only have to build and learn what you need. And there is at least the opportunity to optimize the data store for your particular application's needs - more than likely speed of retrieval, but possibly storage footprint, security, or reliability concerns are foremost for you.
However, in the long run, you're looking at a significant increment of work to extend the home-grown system to a new content type (video, e.g.) or to provide new functionality (versioning, maybe).
Also, it's difficult to separate the choice of a data store approach from the choice of tools that content providers will use to populate and maintain the data store. You'll have to give your authors something more than an HTML form with a textarea and a submit button.
This is related to the advantages of standardization: compatibility and interchangeability. If everybody writes their own library and API, there is no compatibility and no interchangeability, leading to higher costs.

What are the main downfalls when using Google Web Toolkit (GWT)

After a long debate between many RIA/Ajax frameworks, we settled on GWT. Reading about it, the framework seems to do everything well and easily. But like any technology, there is always a downside, and we learn them the hard way.
What are the main downfalls or problems when using Google Web Toolkit (GWT)?
(e.g.: Back/Forward button support, slow response time, layout positioning, JavaScript bugs, etc.)
So far, I got the following from the responses:
Lots of code for simple UI
Slow compilation
Thank you
I have been using GWT for nearly 2 years. Although I could be called a GWT fanatic, there are some issues one should know about ...
As others have said, JavaScript compilation is slow. My application requires nearly 4 minutes on a Core i7 CPU with 8 GB of memory. The total size of the generated JavaScript is about 5 MB. But thanks to development mode, compilation to JavaScript is not needed frequently.
GWT RPC is extremely slow in development mode - roughly 100 times slower than in production mode. It was quite a big problem for us; we did consider giving up GWT for this reason alone. The cause of this sluggish GWT RPC performance in dev mode is serialization: serialization of types other than String is unbelievably slow there. We implemented our own custom serialization; it is nearly 30 times faster than GWT's built-in serialization.
The claim that writing a GWT application requires only knowledge of Java is just an illusion. You need solid knowledge of CSS and the DOM; if you don't have it, you will spend too much time debugging your user interface.
You should consider that you can only use a small subset of the JDK in GWT applications. Reflection is not available; you have to use third-party libraries, such as GWT ENT, or write your own generator for reflection.
Another caveat to consider is the size of the JavaScript generated by the GWT compiler. Most GWT applications consist of a single web page, as opposed to multi-page traditional web applications, so loading the application takes significant time. Although this can be mitigated with a multi-module approach and code splitting (see the sketch below), using these techniques is not always straightforward.
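Code splitting hinges on GWT.runAsync, which marks a point where the compiler may cut the generated script into separately downloaded fragments. A minimal sketch (ReportsPanel is a made-up heavyweight widget):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

// Everything reachable only from onSuccess() can be compiled into a
// separate JavaScript fragment that is downloaded on first use.
GWT.runAsync(new RunAsyncCallback() {
    public void onFailure(Throwable reason) {
        Window.alert("Failed to load code fragment: " + reason);
    }
    public void onSuccess() {
        new ReportsPanel().show();
    }
});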
All calls to the server are asynchronous, so you have to adapt to writing asynchronous code. And the downside of asynchronous code is that it is more complex and less readable than the equivalent synchronous code.
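For readers who have not seen the style, a plain GWT RPC call looks roughly like this; the GreetingService pair below is the stock sample generated by the GWT project wizard, shown here only as a sketch:

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;

// The synchronous interface (implemented on the server):
//   public interface GreetingService extends RemoteService { String greetServer(String name); }
// GWT requires a hand-written asynchronous twin for the client:
//   public interface GreetingServiceAsync { void greetServer(String name, AsyncCallback<String> callback); }

final GreetingServiceAsync greetingService = GWT.create(GreetingService.class);

// Inside some client-side event handler - note there is no return value;
// the result arrives later in the callback:
greetingService.greetServer("hello", new AsyncCallback<String>() {
    public void onFailure(Throwable caught) {
        Window.alert("RPC failed: " + caught.getMessage());
    }
    public void onSuccess(String result) {
        Window.alert("Server said: " + result);
    }
});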
Here are my observations on the downfalls:
steep learning curve if you want to use GWT effectively in large applications, due to the enormous number of high-level conventions associated with GWT:
Model View Presenter paradigm - in fact there are 2 different approaches to MVP proposed on the GWT site
UiBinder
CellWidgets
Editors
the concept of Activities and Places
RequestFactory
AutoBeans
asynchronous requests require a different mode of thinking when it comes to designing the whole application
long compilation times - this does not affect Development Mode as much as full builds (all the permutations for all browsers and languages are compiled, which can take hours for big projects); JRebel can reduce the need for Development Mode restarts a bit
problems with unit testing - GWTTestCase takes so long to start that it is unusable for unit testing; thanks to GWTTestSuite, though, it can work well for integration testing. Thanks to keeping a clean MVP structure it is also possible to unit-test Presenter logic by mocking the Displays (see my answer, and the sketch after this list).
it requires some experience to decide whether specific logic should be implemented client-side (compiled to JS) or server-side
and of course there are some small bugs, especially in new features like Editors and RequestFactory. They are usually resolved quickly in new releases, but it can be annoying when you encounter a GWT issue. Anyway, this last downfall applies to any Java framework I have used so far. ;)
lack of reflection on the client side, which can be worked around with Deferred Binding and Generators, but that is another convention to learn
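As a sketch of the Presenter-testing idea from the list above - the Display interface and Presenter below are made-up minimal versions, not GWT classes, which is exactly why the test runs in a plain JVM without GWTTestCase:

// A presenter that talks to its view only through an interface
// can be tested without any GWT infrastructure.
interface GreetingDisplay {
    void showGreeting(String text);
}

class GreetingPresenter {
    private final GreetingDisplay display;
    GreetingPresenter(GreetingDisplay display) { this.display = display; }
    void greet(String name) { display.showGreeting("Hello, " + name); }
}

// A hand-rolled mock that simply records what the presenter did.
class RecordingDisplay implements GreetingDisplay {
    String lastText;
    public void showGreeting(String text) { lastText = text; }
}

public class GreetingPresenterTest {
    public static void main(String[] args) {
        RecordingDisplay display = new RecordingDisplay();
        new GreetingPresenter(display).greet("world");
        if (!"Hello, world".equals(display.lastText))
            throw new AssertionError("presenter should format the greeting");
        System.out.println("ok");
    }
}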
If I were to start a new GWT project I would:
add a dependency on the Google GIN library (unfortunately it does not work with GWT 2.2 at the moment, but should be compatible soon) - see the wiring sketch after this list
design the general layout with LayoutPanels
structure the application "flow" according to the concept of Places and Activities
put all the Places into a separate GWT module (common navigation references)
put each Activity in its own GWT module (it could help with application code splitting later on)
treat each Activity as glue code that gets View and Presenter providers injected with GIN
design data entities to be compatible with RequestFactory
create all data editors with UiBinder, MVP and the Editors framework in mind
use RequestFactory in Presenters, as well as in Activities (to fetch the initial data to be shown)
inject with GIN every identified common component, like a standard date format, etc.
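The GIN wiring mentioned in the first item would look roughly like this - the module, the injector, and the bound types (ClockService, MainPresenter, etc.) are all placeholder names:

import com.google.gwt.inject.client.AbstractGinModule;
import com.google.gwt.inject.client.GinModules;
import com.google.gwt.inject.client.Ginjector;

// AppModule.java: bindings are declared much as in plain Guice.
class AppModule extends AbstractGinModule {
    @Override
    protected void configure() {
        bind(ClockService.class).to(DefaultClockService.class);
    }
}

// AppInjector.java: GWT.create() on this interface generates the wiring at compile time.
@GinModules(AppModule.class)
interface AppInjector extends Ginjector {
    MainPresenter mainPresenter();
}

// At startup:
//   AppInjector injector = GWT.create(AppInjector.class);
//   MainPresenter presenter = injector.mainPresenter();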
The Spring Roo tool can generate a lot of GWT-based code for standard application elements.
I did a prototype app with GWT some time ago, and I found that the Java-to-JavaScript compilation took a very long time. What's more, the compile time increased noticeably with each line of code we wrote.
I just wasn't happy with the code-compile-test cycle getting slower and slower over time.
Another question on SO about the compiler: How do I speed up the gwt compiler?
I think the main disadvantage is that GWT often requires you to write lots of code to accomplish simple tasks (but it's getting better with each release). On the other hand, it's brilliant when it comes to developing complex, custom widgets - that's where it shines.
Over a couple of projects GWT has proved to be very good in terms of performance, and there haven't been many bugs - it's very good in terms of cross-browser support, IMO.
As a fan of native JavaScript...
I prefer jQuery to GWT, because it's easy to animate or accomplish complicated tasks without writing many classes.

Has inheritance become bad?

Personally, I think inheritance is a great tool that, when applied reasonably, can greatly simplify code.
However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: serializing a class to XML. As soon as inheritance is involved, this can easily turn into a mess - especially if you're trying to serialize a derived class using the base class's serializer.
Sure, we can work around that - something like a KnownType attribute and such. But besides being an itch in your code that you have to remember to update every time you add a derived class, it fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around that, for instance using the NetDataContractSerializer in .NET. Surely a certain advancement.)
In any case, the basic principle still stands: serialization and inheritance don't mix well. Considering the huge list of programming strategies that have become possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases).
Does that make sense? Or am I messing things up? How do you handle inheritance and serialization?
There are indeed a few gotchas with inheritance and serialization. One is that it leads to an asymmetry between serialization and deserialization: if a class is subclassed, serialization will work transparently, but deserialization will fail unless the deserializer is made aware of the new class. That's why we have annotations such as @XmlSeeAlso to annotate data for XML serialization.
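To make that concrete, here is a minimal Java sketch using JAXB's @XmlSeeAlso (the Animal/Dog classes are made-up examples):

import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSeeAlso;

public class SeeAlsoDemo {

    @XmlRootElement
    @XmlSeeAlso(SeeAlsoDemo.Dog.class)  // tells JAXB about subclasses it cannot discover on its own
    public static class Animal {
        public String name = "generic";
    }

    @XmlRootElement
    public static class Dog extends Animal {
        public String breed = "beagle";
    }

    public static void main(String[] args) throws Exception {
        // The context is built from the base class only. Without @XmlSeeAlso,
        // marshalling an Animal reference that actually holds a Dog fails,
        // because Dog is unknown to the context.
        JAXBContext ctx = JAXBContext.newInstance(Animal.class);
        Animal pet = new Dog();
        StringWriter out = new StringWriter();
        ctx.createMarshaller().marshal(pet, out);
        System.out.println(out);
    }
}

The annotation is exactly the kind of "remember to update it for every subclass" itch the question complains about.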
These problems are, however, not specific to inheritance. They are frequently discussed under the terminology of the open/closed world: either you consider that you know the whole world of classes, or you might be in a case where new classes are added by third parties. Under a closed-world assumption, serialization isn't much of a problem; it's more problematic under an open-world assumption.
But inheritance under the open-world assumption has other gotchas anyway. E.g., if you remove a protected method in your classes and refactor accordingly, how can you ensure that there isn't a third-party class that was using it? In an open world, the public and internal API of your classes must be considered frozen once made available to others, and you must take great care to evolve the system.
There are other, more technical, internal details of how serialization works that can be surprising. This is for Java, but I'm pretty sure .NET has similarities. E.g., Serialization Killer by Gilad Bracha, or the serialization and security manager bug exploit.
I ran into this on my current project, and while this might not be the best way, I created a service layer of sorts for it, with its own classes. I think it ended up being named ObjectToSerialized translator, with a couple of interfaces. Typically this was one-to-one (the "object" and the "serialized" had the exact same properties), so adding something to the interface would let you know "hey, add this over here too".
I want to say I had an IToSerialized interface with a simple method on it for generic purposes, and I used AutoMapper for most of the conversions. Sure, it's a bit more code, but hey, whatever - it worked and doesn't gum up other things.

Is ActiveRecord bad practice?

I'm starting a new project, and I've recently found Castle Project's ActiveRecord, which seems like a GREAT solution, but at the same time it looks like something really unconventional.
I was wondering: does this feeling come from learning something new (and I should just get used to it), or is it really bad practice?
Part of what felt weird to me about using ActiveRecord was having to inherit from ActiveRecordBase<T>, and having all those persistence methods on your object (Save and so forth).
But it turns out you don't have to! Instead of having, say:
[ActiveRecord]
class Customer : ActiveRecordBase<Customer> { }
You can just have
[ActiveRecord]
class Customer : inherit from whatever you want { }
and then use ActiveRecordMediator<Customer>. It has basically the same static methods that ActiveRecordBase<T> has, but this way you don't have to clutter your object model with them. If you don't need the various protected method event hooks in ActiveRecordBase<T>, this can make things simpler.
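For illustration, usage then looks something like this - a rough sketch only; the mapping attributes and the Customer properties are invented for the example, and the type still has to be registered with ActiveRecordStarter at startup:

[ActiveRecord("Customers")]
public class Customer
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }
}

// Persistence goes through the mediator's static methods, so the
// Customer class itself carries no Save()/Delete() members:
var customer = new Customer { Name = "Ada" };
ActiveRecordMediator<Customer>.Save(customer);
Customer[] all = ActiveRecordMediator<Customer>.FindAll();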
ActiveRecord is a design pattern first named by Martin Fowler in Patterns of Enterprise Application Architecture. It is fairly common and used extensively in the popular Ruby framework Rails.
It contrasts with the more usual style of development in the .NET world, which is to use DAOs, and that perhaps explains why you're uneasy.
A suggestion: read the source code for some Ruby on Rails applications which are similar to your own projects, and evaluate how you like the design style that results from heavy use of ActiveRecord.
It's not a bad solution, but it has its downsides.
In Patterns of Enterprise Application Architecture, Martin Fowler describes several ways of designing applications that are built on top of a database. These methods differ in how far the application is decoupled from the database, and he also describes how more decoupling makes more complex applications possible. Active Record is described as a way to design simpler applications, but for applications with more complex behaviour you need a Domain Model that is independent of the database, with something like an object-relational mapper in between.
ActiveRecord works very well in Ruby, but it's not easily transferable to other languages. The central feat of AR is the metaphor of table = class, row = instance. This comes out quite elegantly in Ruby, because classes are also objects. In other languages, classes are usually a special kind of construct, and then you have to jump through all sorts of hoops to make it work properly. This takes away some of the natural feel it has in Ruby.
The mixture of the domain object with the service layer is the biggest bad practice (if you see it as a bad practice). You end up calling user.Save(), which means that if you want to change your ORM, you are tied to this pattern. The two alternatives are a layer, i.e. a set of facade classes, to perform your CRUD operations, or putting this inside the domain object as something like
User.Service.Save(user);
If you're using .NET, then Castle ActiveRecord is obviously ActiveRecord-based, as are Coolstorage, SubSonic and a few others.
No, it's not bad practice. Even in .NET it's a fairly well-established pattern now. SubSonic (http://subsonicproject.com) and LINQ to SQL also use the pattern.
Implementations of the pattern, such as SubSonic, are great for quickly and easily creating a data access layer that manages the CRUD for your application.
That doesn't mean it's a good solution for all systems. For large, complex systems you probably want to have less coupling to the data store.
I think ActiveRecord doesn't have much to do with Castle specifically, so the answers to this question - Does the ActiveRecord pattern follow/encourage the SOLID design principles? - could be more enlightening for many.
