Angular 2 architecture for server-side communication

I'm learning Angular 2. What would a recommended file structure be for components that communicate with a server?
Take a feature, say a todo feature. It may have a main todo-component, a todo-list-component, a todo-item-component, a new-todo-component and a todo-service (and probably more).
Then there is another feature, say a personal activity timeline, which takes several sources (including new and finished todos) and presents them to the user. Say it has the same kinds of files, but ones different enough that we could not combine them into generic ones: a main timeline-component, a timeline-list-component, a timeline-item-component and a timeline-service.
Then we want both the todo and the timeline features to communicate with the server. Since they both access partly the same data, perhaps a good idea would be to have a backend-service take care of the server communication for both features.
But how should the components get the data they need? Should the todo components ask the todo-service, which in turn asks the backend-service (and similarly for the timeline components)? Or should the components use the backend-service directly, so that, for example, the todo components use the backend-service for backend calls and the todo-service for other things that naturally belong in a service? Since this is asynchronous and observables are involved (which in the first case would need to be passed through multiple "steps" somehow), perhaps the latter is a simpler/cleaner approach?

Ideally the Component should use a proper Service to pipe itself into the data flow with Observables.
If we look, for instance, at the Chat application example, we can see that each Service has a clear responsibility for data management.
I wouldn't allow a Component to access a generic Http service as it would need to host too much logic to communicate with the server: the component doesn't care about the data source, just shows the data.
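As a rough illustration of that layering (a minimal sketch only: the TodoService/BackendService names and the /api/todos endpoint are invented here, and it is written against the Angular 2-era Http and RxJS APIs):

import { Injectable, Component, OnInit } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

@Injectable()
export class BackendService {
  constructor(private http: Http) {}

  // Generic server access: knows about URLs and JSON, nothing about features.
  get<T>(path: string): Observable<T> {
    return this.http.get(`/api/${path}`).map(res => res.json() as T);
  }
}

interface Todo { id: number; title: string; done: boolean; }

@Injectable()
export class TodoService {
  constructor(private backend: BackendService) {}

  // Feature-level API: components never touch the Http layer directly.
  getTodos(): Observable<Todo[]> {
    return this.backend.get<Todo[]>('todos');
  }
}

@Component({
  selector: 'todo-list',
  template: `<ul><li *ngFor="let todo of todos">{{ todo.title }}</li></ul>`
})
export class TodoListComponent implements OnInit {
  todos: Todo[] = [];
  constructor(private todoService: TodoService) {}

  ngOnInit() {
    // The component just subscribes; it does not care where the data comes from.
    this.todoService.getTodos().subscribe(todos => this.todos = todos);
  }
}

(Everything is squeezed into one listing here; in a real app each class would live in its own file and be registered as a provider.)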

How to deal with hierarchical data with Reflux stores?

Outline
In my app I'm using React and Reflux and have a hierarchical setup with regards to my data. I'm trying to break elements of my app into separate stores to be able to hook events correctly and separate concerns.
I have the following data flow:
Workspaces -> Placeholders -> Elements
In this scenario, when a workspace is created a default placeholder must in turn be created with a reference (ID) to the newly created workspace. The same applies for the placeholder to element relationship.
Sticking point
The Reflux way seems to suggest the PlaceholderStore listens to the triggers from WorkspaceStore, adding the newly created ID to this.trigger().
Reflux only allows a single event to be triggered from stores, thus preventing external components from discerning create or update actions. This means that if one trigger in the store sends an ID as argument[0], subsequent triggers should do the same (to remain consistent). This is a problem for components looking for updates to multiple workspaces (e.g. re-ordering / mass updates).
Undesirable solution
I had thought to add a concept of StoreActions: actions that only stores can create and that other stores would then listen to (effectively discarding the original trigger from stores). With this, components / stores could listen to specific events, and the arguments passed to those events could be tailored without worry. This seems like the wrong way to go and an abuse of the Reflux event system.
Help
Should I be trying to break up related data? Is there a better way to structure the data instead?
I've read about aggregate stores, but not seen any implementations to dissect. Do these offer a solution by way of bringing data from multiple stores together, and if so, what is responsible for creating events React components can listen to?
Many thanks for any help / insight anyone can offer!
Yes, it is perfectly reasonable to call actions from a store. I see actions as initiators of data flows and I consider exceptional flows as separate ones.
A good example is a CRUD store that also handles AJAX calls (to CRUD the data with the server side). The store will trigger the change event as soon as its data gets updated. However, in the event that an AJAX call fails, it should instead start a separate data flow for that, so that other stores and components can listen in on those. Off the top of my head, such errors are of interest to a toast/notification component and to analytics error logging such as GA Exceptions.
The AJAX example may also be implemented through the preEmit hook in the actions, and there are several examples among the GitHub issues discussions on that. There is even this "async actions" helper.
It's by design that the stores only emit a change event. If you want to emit other kinds of events, it basically means you're starting new data flows for which you should be using actions instead.
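A rough sketch of that pattern (the action and store names here are invented, and fetch stands in for whatever AJAX mechanism you use; the Reflux calls are the standard createActions/createStore/listenTo/trigger API):

import * as Reflux from 'reflux';

const TodoActions = Reflux.createActions([
  'loadTodos',           // a component asks for data
  'loadTodosCompleted',  // announces fresh data, a normal flow
  'loadTodosFailed'      // a *separate* flow for the error case
]);

const TodoStore = Reflux.createStore({
  init() {
    this.todos = [];
    this.listenTo(TodoActions.loadTodos, this.onLoadTodos);
  },

  onLoadTodos() {
    fetch('/api/todos')
      .then(res => res.json())
      .then(todos => {
        this.todos = todos;
        this.trigger(this.todos);              // the single change event
        TodoActions.loadTodosCompleted(todos);
      })
      .catch(err => TodoActions.loadTodosFailed(err)); // start the error flow
  }
});

// A toast/notification store can join the error flow without knowing about todos:
const NotificationStore = Reflux.createStore({
  init() {
    this.listenTo(TodoActions.loadTodosFailed, err =>
      this.trigger({ message: String(err) }));
  }
});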

Om but in javascript

I'm getting to be a fan of David Nolen's Om library.
I want to build a not-too-big web app in our team, but I cannot really convince my teammates to switch to ClojureScript.
Is there a way I can use the principles used in om but building the app in JavaScript?
I'm thinking something like:
1. immutable-js or mori for immutable data structures
2. js-csp for CSP
3. just a normal javascript object for the app-state atom
4. immutable-js for cursors
5. something for keeping track of the app-state and sending notifications based on cursors
I'm struggling with number 5 above.
Has anybody ventured into this territory or has any suggestions? Maybe someone has tried building a react.js app using immutable-js?
Edit July 2015: currently the most promising framework based on immutability is Redux! Take a look! It does not use cursors like Om (and neither does Om Next).
Cursors are not really scalable: despite using the CQRS principles described below, they still create too much boilerplate in components, which is hard to maintain and adds friction when you want to move components around in an existing app.
Also, it's not clear to many devs when to use cursors and when not to, and I see devs using cursors in places where they should not be used, making the components less reusable than components taking simple props.
Redux uses connect(), and clearly explains when to use it (container components) and when not to (stateless/reusable components). It solves the boilerplate problem of passing cursors down the tree, and performs well without too many compromises (a minimal connect() sketch follows below).
I've written about drawbacks of not using connect() here
Despite not using cursors anymore, most parts of my answer remain valid IMHO.
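For reference, a minimal sketch of that container/presentational split (the component names and state shape are invented; connect and mapStateToProps are the standard react-redux API):

import * as React from 'react';
import { connect } from 'react-redux';

// Presentational component: reusable, takes simple props, knows nothing about the store.
const TodoList = ({ todos }: { todos: string[] }) => (
  <ul>{todos.map(todo => <li key={todo}>{todo}</li>)}</ul>
);

// Container component: connect() maps the relevant state slice to props,
// so nothing has to be threaded down through intermediate components.
const mapStateToProps = (state: { todos: string[] }) => ({ todos: state.todos });
export const VisibleTodoList = connect(mapStateToProps)(TodoList);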
I have done it myself in our startup's internal framework, atom-react.
Some alternatives in JS are Morearty, React-cursors, Omniscient or Baobab
At that time there was no immutable-js yet and I didn't do the migration, still using plain JS objects (frozen).
I don't think using a persistent data structures lib is really required unless you have very large lists that you modify/copy often. You could use these projects as an optimization when you notice performance problems, but they do not seem to be required to implement Om's concepts and leverage shouldComponentUpdate. One thing that can be interesting is the part of immutable-js about batching mutations. But anyway, I still think it's an optimization and not a core prerequisite for very decent performance with React using Om's concepts (a sketch of the shouldComponentUpdate idea follows below).
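That shouldComponentUpdate point boils down to a shallow reference check: when state is immutable (or frozen), an unchanged subtree keeps the same object identity, so a component can skip re-rendering cheaply. A naive sketch (not atom-react's actual code):

import * as React from 'react';

interface Props { todo: { title: string; done: boolean }; }

class TodoItem extends React.Component<Props, {}> {
  // With immutable/frozen state, an unchanged todo keeps the same reference,
  // so an identity comparison is enough to decide whether to re-render.
  shouldComponentUpdate(nextProps: Props) {
    return nextProps.todo !== this.props.todo;
  }

  render() {
    return <li>{this.props.todo.title}</li>;
  }
}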
You can find our opensource code here:
It has the concept of a Clojurescript Atom which is a swappable reference to an immutable object (frozen with DeepFreeze). It also has the concept of transaction, in case you want multiple parts of the state to be updated atomically. And you can listen to the Atom changes (end of transaction) to trigger the React rendering.
It has the concept of a cursor, like in Om (like a functional lens). It permits components to render the state, but also to modify it easily. This is handy for forms as you can link to cursors directly for 2-way data binding:
<input type="text" valueLink={this.linkCursor(myCursor)}/>
It has the concept of pure render, optimized out of the box, like in Om
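To make the Atom vocabulary concrete, here is a deliberately naive sketch of a swappable reference with change listeners; atom-react's real Atom (with transactions, DeepFreeze and linkCursor) is richer than this:

type Listener<T> = (state: T) => void;

// A ClojureScript-style atom: a swappable reference to an (ideally frozen)
// immutable value, with change notification.
class Atom<T> {
  private listeners: Listener<T>[] = [];

  constructor(private state: T) {}

  deref(): T {
    return this.state;
  }

  // swap replaces the whole value and notifies listeners (e.g. to re-render React).
  swap(update: (current: T) => T): void {
    this.state = Object.freeze(update(this.state));
    this.listeners.forEach(listener => listener(this.state));
  }

  addChangeListener(listener: Listener<T>): void {
    this.listeners.push(listener);
  }
}

// Usage: re-render the whole app from the root whenever the atom changes.
const appState = new Atom({ todos: [] as string[] });
appState.addChangeListener(state => {
  // ReactDOM.render(<App state={state} />, mountNode);  // sketch only
  console.log('render with', state);
});
appState.swap(state => ({ ...state, todos: [...state.todos, 'write docs'] }));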
Differences with Om:
No local state (this.setState(o) forbidden)
In Atom-React components, you can't have local component state. All the state is stored outside of React. Unless you need to integrate existing JS libraries (you can still use regular React classes), you store all the state in the Atom (even for async/loading values) and the whole app rerenders itself from the main React component. React is then just a very efficient templating engine that transforms a JSON state into DOM. I find this very handy because I can log the current Atom state on every render, and then debugging the rendering code is easy. Thanks to the out-of-the-box shouldComponentUpdate it is fast enough that I can even rerender the full app whenever a user presses a key on a text input or hovers over a button with the mouse. Even on a mobile phone!
Opinionated way to manage state (inspired by CQRS/EventSourcing and Flux)
Atom-React has a very opinionated way to manage state, inspired by Flux and CQRS. Once you have all your state outside of React and an efficient way to transform that JSON state to DOM, you will find that the remaining difficulty is managing your JSON state.
Some of these difficulties encountered are:
How to handle asynchronous values
How to handle visual effects requiring DOM changes (mouse hover or focus for example)
How to organise your state so that it scales on a large team
Where to fire the ajax requests.
So I end up with the notion of Store, inspired by the Facebook Flux architecture.
The point is that I really dislike the fact that a Flux store can actually depend on another, requiring actions to be orchestrated through a complex dispatcher. And you end up having to understand the state of multiple stores to be able to render them.
In Atom-React, the Store is just a "reserved namespace" inside the state held by the Atom.
So I prefer all stores to be updated from an event stream of what happened in the application. Each store is independent and does not access the data of other stores (exactly like in a CQRS architecture, where components receive exactly the same events, are hosted on different machines, and manage their own state however they want). This makes it easier to maintain, as when you are developing a new component you only have to understand the state of one store. This somehow leads to data duplication, because multiple stores may now have to keep the same data in some cases (for example, in a SPA it is probable you want the current user id in many places of your app). But if 2 stores put the same object in their state (coming from an event), this does not actually consume any additional data as it is still 1 object, referenced twice in the 2 different stores.
To understand the reasons behind this choice, you can read blog posts by CQRS leader Udi Dahan, The Fallacy Of ReUse, and others about Autonomous Components.
So, a store is just a piece of code that receives events and updates its namespaced state in the Atom.
This moves the complexity of state management to another layer. Now the hardest part is to define precisely what your application events are.
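As a rough sketch of that store/event idea (the event and store names are invented, and Swappable stands in for the Atom from the earlier sketch): each store owns one namespace in the app state and reacts only to the events it cares about.

// Hypothetical application events: the single input every store consumes independently.
interface TodoAddedEvent { type: 'TODO_ADDED'; title: string; }
interface UserLoggedInEvent { type: 'USER_LOGGED_IN'; userId: string; }
type AppEvent = TodoAddedEvent | UserLoggedInEvent;

interface AppState {
  todoStore: { todos: string[] };
  timelineStore: { entries: string[] };
}

// Each "store" is just a function over its own namespace; it never reads other stores.
function todoStore(state: AppState['todoStore'], event: AppEvent) {
  if (event.type === 'TODO_ADDED') {
    return { todos: [...state.todos, event.title] };
  }
  return state;
}

function timelineStore(state: AppState['timelineStore'], event: AppEvent) {
  if (event.type === 'TODO_ADDED') {
    // Duplication by design: the timeline keeps its own view of the same event.
    return { entries: [...state.entries, `Added todo: ${event.title}`] };
  }
  return state;
}

interface Swappable<T> { swap(update: (current: T) => T): void; }

// Dispatching an event swaps the app state once, letting every store update its namespace.
function dispatch(atom: Swappable<AppState>, event: AppEvent) {
  atom.swap(state => ({
    todoStore: todoStore(state.todoStore, event),
    timelineStore: timelineStore(state.timelineStore, event),
  }));
}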
Note that this project is still very unstable and undocumented / not well tested. But we already use it here with great success. If you want to discuss it or contribute, you can reach me on IRC: Sebastien-L in #reactjs.
This is what it feels like to develop a SPA with this framework. Every time it renders, in debug mode, you have:
The time it took to transform the JSON to Virtual DOM and apply it to the real DOM.
The state logged to help you debug your app
Wasted time thanks to React.addons.Perf
A path diff compared to previous state to easily know what has changed
Check this screenshot:
Some advantages that this kind of framework can bring that I have not explored so much yet:
You really have undo/redo built in (this worked out of the box in my real production app, not just a TodoMVC). However, IMHO most actions in many apps actually produce side effects on a server, so it does not always make sense to revert the UI to a previous state, as the previous state would be stale.
You can record state snapshots, and load them in another browser. CircleCI has shown this in action on this video
You can record "videos" of user sessions in JSON format, send them to your backend server for debug or replay the video. You can live stream a user session to another browser for user assistance (or spying to check live UX behavior of your users). Sending states can be quite expensive but probably formats like Avro can help. Or if your app event stream is serializable you can simply stream those events. I already implemented that easily in the framework and it works in my production app (just for fun, it does not transmit anything to the backend yet)
Time traveling debugging can be made possible, like in Elm
I've made a video of the "record user session in JSON" feature for those interested.
You can have Om-like app state without yet another React wrapper and with pure Flux - check it here: https://github.com/steida/este. That's my very complete React starter kit.

Using Skue or similar frameworks to build REST API on google-app-engine

Searching for ways to build REST APIs, I found skue (https://code.google.com/p/skue/). However, there is not much information on the site. My plan is to build a REST API strictly as follows:
Models << Business logic << RESTful resources.
What this means is: the models are accessed exclusively by the business logic; the RESTful resources interface is the only layer a client has direct access to. I am specifying all this to avoid people suggesting the appengine-rest-server.
My question is: has anyone ever successfully used Skue? If so, do you have any examples you would not mind sharing? GET and POST would be sufficient, but more is welcome. If not Skue, are there any frameworks out there that allow building such REST APIs on top of Google App Engine?
I'm the author of Skuë. Skuë means "mouse" in Bribrí, which is the language of an indigenous group of Costa Rica, my country.
I know there isn't enough information on the site (https://code.google.com/p/skue/) for developers who want to use it in their own projects. I'm sorry for that; I just haven't had the time to write proper documentation, since this is just a side project and not my daily work.
However, I'm willing to help you ramp up so you will be able to use it. The first thing to notice is the small example that is part of the source code. Go to the site, then click on Source -> Browse and expand the "app" branch.
The code inside the "app" folder represents your own API implementation. The package "skue" contains the actual implementation of the library, so basically you just create your Python project for Google App Engine and include the skuë package directly in it.
Now overwrite your main.py file with the content of the downloaded main.py: main.py from the Skuë project.
The most important part of that file is where you put your own routes to your resource implementations. Notice the use of ContactResource here:
TASK_HANDLERS = [
]
API_HANDLERS = [
    ('/contacts/(.*)', ContactResource)
]
API_DOC = [ ('/', ApiDocumentationResource) ]
Browse to the contact resource implementation.
There are a lot of things going on under the hood there, but the idea is that you don't need to worry about those.
You need to inherit from the proper Resource parent class depending on the kind of resource you want to create, there are four basic types:
DocumentResource: A document resource is a singular concept that is akin to an object instance or database record.
CollectionResource: A collection resource is a server-managed directory of resources. Clients may propose new resources to be added to a collection. However, it is up to the collection to choose to create a new resource, or not.
StoreResource: A store is a client-managed resource repository. A store resource lets an API client put resources in, get them back out, and decide when to delete them.
ControllerResource: A controller resource models a procedural concept. Controller resources are like executable functions, with parameters and return values; inputs and outputs.
Like a traditional web application’s use of HTML forms, a REST API relies on controller resources to perform application-specific actions that cannot be logically mapped to one of the standard methods (create, retrieve, update, and delete, also known as CRUD).
Now take a look at the "describe_resource" implementation in the ContactResource example. When you inherit from one of the basic resource types described above, the next step is to programmatically describe your resource to the outside world using that method. The underlying Skuë implementation uses that method to validate required parameters and also to self-describe the endpoints when you perform an OPTIONS request on them.
And the final step is for you to implement the methods (CRUD) that you want to handle for your resource.
Again with the ContactResource example, that resource handles the creation, update and read of Contact items.
I hope this helps you at least to understand how to start using the library. I will create better tutorials in the future, though.
In the meantime you can contact me via email: greivin.lopez#gmail.com and I will send you a more elaborated example or even something that matches your requirements.
Important Note: Currently the Skuë project only supports responses in JSON format. If you plan to use another format you will need to create the proper classes to handle it.
Greetings from Costa Rica.
I haven't used skue, but what you're looking for sounds like a good fit for Google Cloud Endpoints. See my previous answers on the subject for more details.

Storing user data for a C-based Native Client instance

I have been working on a C-based Native Client module for Google Chrome. Many of the module functions that are called by the NaCl system have a parameter of type PP_Instance which uniquely identifies the module instance.
My question: Is there any way to associate user data with this instance handle?
The C API specifies that it is an opaque handle. It provides no functions for linking user data to the handle. Right now, I have to use a bunch of global variables within the module to share state among the functions. It doesn't feel like the right solution. I'm not sure if more than one instance will ever share the process space but I'm not making any assumptions here.
I suppose I could implement some sort of look up table to map instances to unique contexts that happen to live in the global scope. But that also seems like it should be unnecessary for a C-based API. The C++ API avoids this by virtue of its classes.
PP_Instance should be used as a key to look up the state / objects associated with the plugin instance. More than one plugin instance may be instantiated in a module as per the API, when, for example, multiple embed tags are present in the containing frame. Currently the NaCl implementation of Pepper does not do this -- instead, multiple processes are created, each containing a single module that instantiates a single Pepper plugin instance. However, this is an implementation detail (or maybe bug?) that is subject to change, and it would be better to program defensively and be able to handle multiple DidCreate events.
Of course, if your NaCl module is guaranteed to never be used by anyone else and you know you won't ever have two embeds of the same module, then it might be okay to assume singleton instance and use global state, but doing things the "right" way isn't that hard, so why not?
See native-client-discuss thread for more discussion on this topic.

How does the Composite C1 architecture work?

Can anyone give a high-level description of what is going on in the Composite C1 core? In particular I am interested in knowing how the plugin architecture works and what the core components of the system are, i.e. when a request arrives, what happens in the architecture. The description doesn't have to be too verbose, just a list of steps and the classes involved.
Hopefully one of the core development team would enlighten me... and maybe publish some more API (hint hint more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see:
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule will intercept all incoming requests and figure out if the requested path corresponds to a valid C1 page. If it does, the url will be rewritten to the C1 page handler ~/Renderers/Page.aspx
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to make up the page to be returned to the client. The actual code for the IHttpHandler is easy to see since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page Id and which language was requested, and looking at whether we're in preview mode or not, which datascope, etc.
OnInit
Now we're fetching the content from each Content Placeholder of our page and executing the functions it may contain. It's done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it will call the method ExecuteFunctions, which will run through the content and recursively resolve C1 function elements (<f:function />), execute them and replace the element with the function's output. This is repeated until there are no more function elements in the content, in case functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControl etc., this is necessary for them to work correctly and trigger the event lifecycle of WebForms.
And that's basically it. Rendering of a requested page is very simple and very extendable. For instance, there is an extension that enables the use of MasterPages which simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return an MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the core APIs, there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones are there to get the job done.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things working with the raw data instead of going through facades all the time. This is possible since most configuration of C1 is done by using its own data layer to store configuration.
The Composite C1 core group has yet to validate/refactor and document all the APIs in the system and hence operates with the concept of 'a public API' and what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and are a technique to abstract logic from execution. Basically everything that either performs an action or returns some data/string/values is a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with it. Out of the box C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or for having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use the Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
Read more about functions here http://users.composite.net/C1/Functions.aspx and how to use e.g. Razor to make functions here http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system; querying, adding and removing. Locales can be whatever CultureInfo object is known by your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that matches the CultureInfo your request is running in. Use this, instead of hardcoding strings on your pages or in your templates.
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut down one's threads in a proper manner. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static methods on Composite.Data.DataEvents<T>. To attach to these events when the system start up, use the ApplicationStartupAttribute attribute.
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the Data Layer outside of a corresponding C1 page request. This could be a custom handler that just has to feed the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap the code that accesses the data like this:
using(Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
    // Code that works with C1 data layer goes here
}
For accessing and manipulating data it's recommended NOT to use the DataFacade class, but to wrap all code that gets, updates, deletes or adds data like this:
using(var data = new DataConnection())
{
    // Do things with data
}
IO
When working with files and directories it's important to use the C1 equivalents of .NET's File and Directory, namely Composite.Core.IO.C1File and Composite.Core.IO.C1Directory. This is because C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as on a normal Windows Server. By using C1's File and Directory wrappers you can be sure that the code you write will be able to run on Azure as well.
C1 Console
The console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade or Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use the Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refreshing a tree, set focus on a specific element, open a new tab etc. etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a very fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation, so that e.g. a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you are familiar with this, you should feel at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions to the Console. Much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localisation strings for use with the Resources, the users and permissions, the url formatters etc. are all providers.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade, Get, Update, Delete, Add etc. are registered. Every provider informs the system which interfaces it can interact with and C1 makes sure to route all requests for specific interfaces to their respective dataproviders.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console the first time are configured here, no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menuitems to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content like versioning, extranet security, custom cut/paste and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here
And then we have a few sections for pure configuration, like caching or what to parallelize, but they are not as interesting as the providers.
Defining and using sections
Sections in composite.config and other related .config files are completely standard .NET configuration and obey the rules thereof. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Libraries for handling most of these common things like configuration, logging and validation, and therefore all Composite sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically make sure to wire things up for us.
If you want to access the configuration for a particular section you would call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type and you're good to go.
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It's a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler interface as an argument; that assembler is responsible for getting these custom attributes and values from the element in the .config file and emitting a usable object from them. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the console, while others show how to make function providers and trees, and how to hook commands onto existing elements. There are even examples of how to manipulate the SOAP web service communication going on between client and server so you can make it do things the way you want. And the list goes on.
