A homework exercise about database architectures

I don't need any answers; I am simply looking for a breakdown, in layman's terms, of what this essay is asking for.
I have done part A; it's very straightforward. However, I am not sure what he means when he refers to "applications". Part D is also simple enough.
My main problem is that I am struggling with the context of parts B and C. For whatever reason the question (perhaps the wording) is going straight over my head. If anyone could simply break down and explain parts B and C, I would be very grateful, and I can finish this essay up.
Any links would be an added bonus but are certainly not expected. Thanks, guys.
a) Discuss the 2-tier architecture and the 3-tier architecture of database application processing in terms of architectural layout, applications, performance, security, advantages and disadvantages of such environment. (40 marks)
b) Discuss the software environment of both architectures including the use of high level and mark-up languages, servers with interfaces on different platforms, communication links and tools used. (20 marks)
c) Discuss future planned developments in these two architectures environments in terms of hardware configuration, software utilisation and applications areas. (20 marks)
d) Provide a conclusion of the report indicating a summary of your investigation by arguing the suitability of architectural layout software environment and application areas with reasoning. (10 marks)

Okay guys, if anyone is ever tasked with something similar to this coursework: Part A refers to a comparison of the architectural layouts of 2-tier and 3-tier architectures; discuss the layers of each and the performance and security issues with each. Part B is asking you to discuss HTML, SQL and the other languages used to communicate within each architecture, such as a Java application using JDBC; at this point I discussed the SQL API, etc. For Part C I discussed the layout of the clients and servers in hardware terms, APIs, distributed databases, etc. I hope this may help someone in the future. Si


How to apply the 12 Factor App methodology to Linux driver development? [closed]

I'm an engineer currently developing Linux kernel-mode drivers and user-mode drivers. When I came across the theory of the 12 Factor App, a strong voice echoed around my brain: "THIS IS THE FUTURE OF DEVELOPMENT!"
And I keep wondering how to apply this method to Linux KMD and UMD design and development, since the theory is very much web-app based (I'm a part-time open-source web developer).
Current development language: C
Current testing automation: custom-implemented Python testing framework (progress-based, no unit tests)
Please give me some suggestions on this. Thanks in advance.
As with most development guidelines, there is a gap between the guideline and the enforcement.
For example, in your "12 factor app" methodology, one of the factors is:
Codebase - One codebase tracked in revision control, many deploys
Which sounds great, and would really simplify things. Until you get to the point of utility libraries. You see, when you find you are reusing code across multiple projects, you probably want:
Independent build and release chains for the multiple projects.
This could mean two codebases, but the above states one codebase (perhaps one per project, perhaps one per company). Let's assume one per company first, which is easy to see as non-ideal, because you would have commits unrelated to a project in that project's commit history. OK, one per project is more sensible; but what if projects need to share code, like the libraries that format their communications and control the send/receive protocols? Well, we could create a third "protocol library" so that we have revisioning around the protocol; but that violates the "one codebase (per project)" rule, because now you have two codebases comprising a single releasable item.
The decisions here are not simple. The other approach is to copy the protocol code into both projects and keep them in sync by some other means.
Dependencies - Explicitly declare and isolate dependencies
It's a great idea, and one that makes development easier in many ways. Again, just to illustrate how a great idea can suffer without clear guidelines on how to implement it: what do you do when you are using a library that doesn't attempt to isolate the dependencies it uses? Many of the more complex libraries themselves depend on other libraries, and generally they clearly declare their dependencies, as do the libraries used by those libraries; however, sometimes the base, core libraries used by multiple projects (logging, configuration, etc.) wind up being used at different release versions. The isolation occurred on a per-library basis, but not on a per-project basis. You could fix it, provided you wanted to (and could) fork and clone the libraries, restructuring them to properly isolate their dependencies for overall coordination of version numbers; but generally you will lack the time to work on other people's projects.
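As a small, hedged illustration of how that divergence shows up in practice, here is a minimal Python sketch that reports which version of each shared base library a given environment actually has installed; the library names are hypothetical placeholders, not anything from the question.

    from importlib.metadata import version, PackageNotFoundError

    # Hypothetical shared "base" libraries several in-house projects depend on.
    SHARED_BASE_LIBS = ["logging-lib", "config-lib", "protocol-lib"]  # illustrative names only

    def audit(libs):
        """Print the installed version of each shared library in this environment,
        so that per-project divergence becomes visible when run in each project."""
        for name in libs:
            try:
                print(f"{name}: {version(name)}")
            except PackageNotFoundError:
                print(f"{name}: not installed in this environment")

    audit(SHARED_BASE_LIBS)

Running the same audit inside each project's environment makes the "per-library vs. per-project isolation" mismatch concrete.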
In general, the advice under the "12 factor app" methodology is good; but it leaves you to do the work of translating the guidelines into development protocols. Enforcement then becomes a matter of interpretation, and the means of enforcement (as well as the interpretation) fall on you to implement.
And some of the guidelines look dangerously over-simplified:
Concurrency - Scale out via the process model
While this is an easier way to go, it's not how any single high performance web server works. They all use threading, thread pools, and other more complex constructs to avoid process switching. These constructs (which are admittedly harder to use) were created specifically due to the limitations of a traditional process model. After all, it's not common to launch a process per web request, nor would you generally "tune a program for better performance" by starting a second copy on the same machine. Certainly, there are architectures where this could work; but, so far these architectures haven't outperformed their competition.
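To make the contrast concrete, here is a small, hedged Python sketch (my own illustration, not from the 12-factor material) of the two within-machine options discussed above: a reusable thread pool inside one process versus a pool of worker processes.

    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def handle(request_id):
        """Stand-in for per-request work; a real server would parse, query and respond."""
        return f"handled {request_id}"

    if __name__ == "__main__":
        requests = range(8)

        # What most high-performance servers actually do: reuse a pool of threads
        # inside one process instead of paying a process switch per request.
        with ThreadPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(handle, requests)))

        # The "scale out via the process model" style: independent worker processes.
        # Between machines, separate processes are the only realistic unit of scaling.
        with ProcessPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(handle, requests)))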
Between machines, I wholeheartedly agree. Process scaling is the only way to go in a distributed environment; but there's not much in this methodology that talks about distributed algorithms, or even distributed computing approaches; so, again, it's another thing left up to the implementor.
Finally, their process commentary seems really out of place for writing a command-line tool. The push to daemonize things works really well for microservices; however, you can't turn everything, even the clients, into microservices. Eventually you'll have to write something that isn't "managed by systemd", something that starts execution and ends execution without being an always-on service.
So, it's a good framework, which might not work for some things, even if it is excellent for many things; but, in my opinion, the tooling to enforce it would have to be built by the organization using it because the interpretations one organization might make could differ from another organization.

Theories of software engineering [closed]

In my career I've come across two broad types of theory: physical theories and educational/management theories:
Physical theories are either correct (under appropriate conditions) or incorrect, as judged by the physical world.
Educational/management theories have the appearance of being like physical theories, but they lack rigorous testing. At best they give new ways of thinking about problems. Multiple theories are useful because one of them may speak to you in the right way.
As a hobbyist student of software engineering, I notice there appear to be a lot of theories of software engineering (such as agile programming, test-driven design, patterns, extreme programming). Should I consider these theories to be physical-like or educational/management-like?
Or have I misunderstood software engineering and find myself in the position of being "not even wrong"?
Software engineering is ultimately about psychology, how humans manage complexity. So software engineering principles are far more like education and management theories than physical principles.
Some software engineering has solid math behind it: O(n log n) sorts are faster than O(n^2) sorts, etc. But mostly software engineering is about how humans think about software. How to organize things so that maintainers don't go crazy, anticipating what is likely to change and what is not, preventing and detecting human errors, etc. It's a branch of psychology or sociology.
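The asymptotic claim is one of the few points here that can be demonstrated directly; a quick, hedged Python timing sketch (my own, not the answerer's) makes it concrete for a modest input size.

    import random
    import timeit

    def insertion_sort(xs):
        """A textbook O(n^2) sort, for comparison against the built-in O(n log n) sort."""
        xs = list(xs)
        for i in range(1, len(xs)):
            key, j = xs[i], i - 1
            while j >= 0 and xs[j] > key:
                xs[j + 1] = xs[j]
                j -= 1
            xs[j + 1] = key
        return xs

    data = [random.random() for _ in range(5000)]
    print("O(n^2) insertion sort:", timeit.timeit(lambda: insertion_sort(data), number=1), "s")
    print("O(n log n) built-in sort:", timeit.timeit(lambda: sorted(data), number=1), "s")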
I think the appropriate theoretical split is those "harder" sciences (where there can be proofs) and the softer topics with qualitative answers and few proofs if any.
Software to me is mostly about language and communication, a topic that is qualitative and subjective mostly. Every now and then we touch into algorithms and other "hard" areas, where proofs and rigorous formalisms exist. So, yes, both please.
Not even wrong.
All the software engineering "theories" seem to be nothing but advice on particular things to try to see if they make you and your team more productive. Even if one could set them up to be falsifiable like scientific theories, there would be very little point to it. That is not to say that it is not worthwhile to learn about them -- au contraire, you should become familiar with as many of them as possible and try to figure out in what kinds of teams and environment they may work better. But be careful: avoid dogma and thinking there are silver bullets.
I wouldn't call agile programming, test-driven design, patterns, extreme programming, etc. "theories"; they're methodologies, or work styles. They don't make any assertions.
Generally the field of Informatics is divided into 4 areas (need to find a link to the source, SWEBOK?), which are distinct although related and interconnected:
Computer Science
Software Engineering
Computer Engineering
Information Systems
There is a good analysis of engineering vs. science in Steve McConnell's "Professional Software Development". Check out his essay Software Engineering, Not Computer Science.
Software development is more about engineering - finding practical solutions to practical problems - than anything else. It is true that software engineering relies on computer science, mathematics, complexity theory, systematics, psychology, and other disciplines, but it cannot be equated to any one of them, nor is it simply a bundle of them.
Besides theories, there are also frameworks, models and rules of thumb. Ideas, sure, but based on a less rigorous foundation, which loosely belong to your education/management category.
Computer Science has some strong foundational theories (physical ones by your definition), but these mostly consist of tying together the smaller elements.
Software Engineering on the other hand, is a relatively new discipline that involves utilizing computers and occasionally Computer Science to build software systems. Most of the practice in that arena is entirely based on non-rigorous experimental and anecdotal evidence. Since the jury is still out on even the simplest of issues, most of what passes for practices could be best described as pure guess-work and irrational preference. It's one of those disciplines where you really do have to know a lot to realize how much is built on a house of very unstable cards.
Paul.
Being intangible, programming is a very difficult activity to relate to another human being, even other programmers. Software engineering tries to add structure where there is none, but such structure is not rooted in the inevitability of reality. So all these approaches become like religions in how groups of people behave when trying to appease their technical gods (or demons).
All these theories and best practices still haven't brought us to the point where we can produce software systems reliably and predictably. The newest of these surveys is dated 2001; Jeff's column from 2006 still laments high failure rates.
It'd be interesting to see if anybody's working on an updated survey.
Avionics and the software running my car don't seem to fail at anything close to the rates quoted for enterprise software. Why don't enterprise developers follow their practices more closely? Maybe we should all be writing Ada....[just kidding]
They're like recipes: they're guidelines, whose success depends:
Partly, on the quality of the recipe
Partly, on the quality of the ingredients
Partly, on the skill of (and time available to) the practitioners
For me, it's my own theory with many of the others used as a base. I don't know any one that uses a single specific theory. And that's not a cop out answer.
Just as there are different languages, theories/practices/methodologies are to be used in distinct situations. The structure, rules, and definitions are all the ways in which people understand how things are to be accomplished, but what is to be accomplished is subjective.
Adapt, knowing the agile, extreme, or other methods, at the discretion of the client, project, programmer, time, and especially whatever makes you successful/happy. Be a team and adjust/adapt to what your team is doing for the greater good; just keep in mind to have something that you have defined in your own mind, or it's just chaos.
[SOAPBOX]
I started programming on the Atari 400 with a converted flat keyboard and 64K upgrade. When I started college, it was VB 1.0 which I saw my Economics Teacher use to build a teaching tool to help people learn more about economics using graphs and visual inputs. That was cool! And I knew I could do that.
This same economics teacher, who later became an IT teacher too (he was good), asked if I would teach a class on debugging. He said, "I haven't met someone who understands the concepts and has a natural ability to debug as fast as you do; would you teach us what you know and how you do it?" This was a boost to my ego, of course, but also a chance to teach, mentor, and help others.
Every one of those instances has fueled my desire to help other people. For me, I want a computer to do exactly what I want, and to help other people in their business and home lives increase their quality of living, learn more, and get more done.
Someone said to me one time, "You're only as good as your tools". Learn, practice, and grow.
If you've defined something, it's working, has order, and it stretches you and the boundaries, you're not wrong.
Is there such a thing as "software engineering"?
Or: is software development really "engineering"?
Facts:
Our industry is very young relative to many other engineering disciplines.
We still do not have "solid" practices and "theories".
So, to be honest, if we look at it from the perspective of other, mature engineering practices, it is hard to call what we do "engineering".
We have a very bad reputation for failing [our failure rates would not be acceptable in many engineering branches].
Reality or fantasy? Pick one :-)
Some people say that we do not have "solid" practices and "theories" because we are a young "engineering" branch, and that in time we will have them. Those people say that we need to work more on "theory" and foundations.
Other people say that software development is an "experimental social activity" because of the nature of our problem domain. We will have practices, theories, methodologies and processes, but they will always have a second-order effect; the unique people, their feelings and qualities, and their interactions with everyone else are more influential. These people see software development as a Complex Adaptive System.
And there is also another reality:
80% of software development activities really do not need a very brilliant mind; any "average" person can do them.
But the remaining 20% is a very hard and multidisciplinary task.
And there is yet another new perspective, my own :-)
This view says that software development is not a branch of "engineering"; it is a branch of the natural and social sciences. So we need software anthropology and software anthropologists.
Theory: I think a theory is anything that describes "how" a natural system works and, in order to prove it, offers logical deductions based on previous knowledge, substantiated by inductions made from experiments.
The whole body of these theories and experiments is what we call science.
Software: Software is a man-made system, i.e. an engineered system. Engineering applies science in order to create new systems. In that regard, pure software engineering applies the science of discrete mathematical systems.
But commercial software engineering has a different motivation, called economics.
In that regard, it has to take into account all the factors that affect economics, the chief of them being people. So psychology plays a huge part.
But since psychology itself is just a theory of "how" the human mind works, based on pattern recognition without logical deductions grounded in human biology, it has many flaws, such as assuming correlation implies causation.
So, yeah, I think from the above you can better understand what commercial software engineering, taken as a whole, really is.

What are the most important technical skills for a computer software expert witness?

As a computer software expert witness, I am required to analyze a huge range of different software technologies. During my deposition or trial testimony, the opposing expert may direct questions targeted at exposing or revealing my weaknesses. There is no time for research or education.
Given that I can't be an expert in every technology, what are the most versatile and transferable skills or technologies I should learn?
I will start with the obvious:
Databases are omnipresent (but which are the best archetypes?)
C is often involved due to the prevalence of older Windows and DOS based systems
What should be added to this list?
I may be misreading your question, but I suspect that if you are being called upon as an expert witness, you already have the expertise they are seeking. I suppose that learning more technical aspects of any technology would make you more likely to become an expert witness, but ultimately I would say the most important skill is truthfulness. If you don't know, say so. Any unknown questions can then become the "to be studied" list for later review.
just my 2 cents ...
It would be silly to call you as an expert witness if you cannot be an expert in the line of questioning.
Well, the big thing about being a witness is to listen to the counsel for whom you are testifying. In the computer world, your credibility is not easily impugned. If they were to try to do so, it would be by calling into question formal education or training as insufficient for being an expert. They won't be asking you to explain what a Turing machine is, or how to write a sorting algorithm in Lisp, unless it is directly relevant to the matter at hand. They won't be playing "Gotcha!" with difficult technical questions, as it won't resonate with the judge or jury. How many jury members can you picture saying: "I can't BELIEVE that "expert" doesn't understand database normalization! What a fraud!"? If the jury doesn't understand the question, they won't understand the answer. Any first-year law student will tell you all about this problem (it comes up in all kinds of expert testimony situations).
No, your credibility will be questioned in laymen's terms. If you are being asked to testify, it's because you have the answers that are relevant. Stick to those and don't do any tricks (as your counsel will tell you), and you'll be fine. If your information is correct, and your degree/experience is solid, you may not even be cross-examined (they will just find their own expert to say the opposite of what you said).
Computer software expert witnesses also need a good understanding of networking technology and the ability to explain it to a jury or judge. Because a great deal of software is client/server based, being able to explain how firewalls, IP addresses, HTTP and internet routers work, and why you can tell that certain pieces of software were definitely used at certain times and locations, is important.
Being familiar with server operating systems and the log files they generate is also helpful.
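For instance, much of the "what ran, when, and from where" story comes straight out of such logs; a minimal, hedged Python sketch (the log line and its format are made up for illustration) shows the kind of extraction involved.

    import re

    # A made-up line in the common Apache/nginx "combined" log format; real server
    # logs vary with configuration, so treat this format as an assumption.
    line = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /login HTTP/1.1" 200 512'

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3})'
    )
    match = pattern.match(line)
    if match:
        # Who connected, when, what they asked for, and whether it succeeded.
        print(match.group("ip"), match.group("when"), match.group("request"), match.group("status"))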
I would say forget learning new technologies beyond understanding industry concepts and how they're really applied in the real world. The key thing you need to be able to do as an expert witness is explain these concepts in terms that can be easily understood by the layman. You already know this stuff, or you wouldn't be the expert witness. You're there because your name and reputation are well thought of and they [prosecution/defence] need your help.
I think of it like this: The lawyer/barrister/attorney's job is to sell their vision of the truth and get the jury to buy into their vision [skewed as that vision may or may not be]. Your job is to sell the facts. Either the two are one and the same, or they aren't. Sell the facts to the best of your ability, if you have easily understood examples [by easily understood, I mean by an 8 year old], all the better.
Key concepts I would think would be software systems that people will use/exploit to either commit or to cover up a crime:
Networking systems: Common protocols, packet tracing etc.
Firewall systems and common exploits.
Viruses and replication: Worms vs. Trojans etc.
Major Operating Systems: Basic concepts and common exploits.
Web Applications: How they're structured and how they can be exploited.
Common hacking concepts: DoS, OOB, SQL injection etc. (see the sketch just after this list).
Email concepts: transmission, receipt, tracking, header information.
Data storage and recovery concepts and key software.
Surveillance Techniques: Packet analysis, key loggers etc.
I'm sure there are a few others, but no others immediately spring to mind.
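On the SQL injection item above, a tiny, hedged Python/sqlite3 sketch (my own toy example, not from the answer) shows the difference an expert is typically asked to explain: a string-built query versus a parameterized one.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: the attacker's quote characters rewrite the query itself,
    # so this returns every row instead of none.
    rows = cur.execute(f"SELECT * FROM users WHERE name = '{attacker_input}'").fetchall()
    print("string-built query returned:", rows)

    # Safe: the driver passes the value as data, not as SQL text.
    rows = cur.execute("SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
    print("parameterized query returned:", rows)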
Definitely learn about email systems. I'd imagine email communications come into play fairly often in court cases these days. Learn how SMTP and POP3 work. Learn the basics of email servers and what ways they can be manipulated and how difficult it is to do.
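Much of what gets argued about an email is in its headers; here is a small, hedged sketch using Python's standard email module on an entirely made-up message, just to show where the hop-by-hop Received trail lives.

    from email import message_from_string

    # A made-up raw message; in practice this would come from a mailbox export or
    # a server archive produced during discovery.
    raw = "\n".join([
        "Received: from mail.example.org (mail.example.org [192.0.2.10])",
        "\tby mx.example.com with ESMTP; Tue, 10 Oct 2023 13:55:36 +0000",
        "Received: from sender-laptop (198.51.100.23)",
        "\tby mail.example.org with SMTP; Tue, 10 Oct 2023 13:55:30 +0000",
        "From: alice@example.org",
        "To: bob@example.com",
        "Subject: Quarterly numbers",
        "",
        "Body text here.",
    ])

    msg = message_from_string(raw)

    # Each Received header records one hop; reading them bottom-up approximates
    # the path the message took from sender to recipient.
    for hop in reversed(msg.get_all("Received", [])):
        print(hop.replace("\n", " ").replace("\t", " "))
    print("From:", msg["From"], "| To:", msg["To"], "| Subject:", msg["Subject"])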
I think you're deceiving yourself: what is a "computer software expert witness"? That's like saying that because you're an electrical engineer, you have the capacity to answer any engineering question, whether it's from chemical, mechanical, civil or some other specific area of engineering.

How to design and verify distributed systems?

I've been working on a project, which is a combination of an application server and an object database, and is currently running on a single machine only. Some time ago I read a paper which describes a distributed relational database, and got some ideas on how to apply the ideas in that paper to my project, so that I could make a high-availability version of it running on a cluster using a shared-nothing architecture.
My problem is that I don't have experience in designing distributed systems and their protocols - I did not take the advanced CS courses about distributed systems at university. So I'm worried about whether I can design a protocol that does not cause deadlock, starvation, split brain and other problems.
Question: Where can I find good material about designing distributed systems? What methods are there for verifying that a distributed protocol works correctly? Recommendations of books, academic articles and other resources are welcome.
I learned a lot by looking at what is published about really huge web-based platforms, and especially how their systems evolved over time to meet their growth.
Here are some examples I found enlightening:
eBay Architecture: Nice history of their architecture and the issues they had. Obviously they can't use a lot of caching for the auctions and bids, so their story is different in that point from many others. As of 2006, they deployed 100,000 new lines of code every two weeks - and are able to roll back an ongoing deployment if issues arise.
Paper on Google File System: Nice analysis of what they needed, how they implemented it and how it performs in production use. After reading this, I found it less scary to build parts of the infrastructure myself to meet exactly my needs, if necessary, and that such a solution can and probably should be quite simple and straight-forward. There is also a lot of interesting stuff on the net (including YouTube videos) on BigTable and MapReduce, other important parts of Google's architecture.
Inside MySpace: One of the few really huge sites built on the Microsoft stack. You can learn a lot about what not to do with your data layer.
A great start for finding many more resources on this topic is the Real Life Architectures section on the "High Scalability" website. For example, they have a good summary of Amazon's architecture.
Learning distributed computing isn't easy. It's really a very vast field covering areas such as communication, security, reliability, concurrency etc., each of which would take years to master. Understanding will eventually come through a lot of reading and practical experience. You seem to have a challenging project to start with, so here's your chance :)
The two most popular books on distributed computing are, I believe:
1) Distributed Systems: Concepts and Design - George Coulouris et al.
2) Distributed Systems: Principles and Paradigms - A. S. Tanenbaum and M. Van Steen
Both these books give a very good introduction to the current approaches (including communication protocols) that are being used to build successful distributed systems. I've personally used the latter mostly and I've found it to be an excellent text. If you think the reviews on Amazon aren't very good, it's because most readers compare this book to the other books written by A. S. Tanenbaum (who IMO is one of the best authors in the field of computer science), which are quite frankly better written.
PS: I really question your need to design and verify a new protocol. If you are working with application servers and databases, what you need is probably already available.
I liked the book Distributed Systems: Principles and Paradigms by Andrew S. Tanenbaum and Maarten van Steen.
At a more abstract and formal level, Communicating and Mobile Systems: The Pi-Calculus by Robin Milner gives a calculus for verifying systems. There are variants of pi-calculus for verifying protocols, such as SPI-calculus (the wikipedia page for which has disappeared since I last looked), and implementations, some of which are also verification tools.
Where can I find good material about designing distributed systems?
I have never been able to finish the famous book by Nancy Lynch. However, I find the book by Sukumar Ghosh, Distributed Systems: An Algorithmic Approach, much easier to read, and it points to the original papers if needed.
I should add that I haven't read the books by Gerard Tel and Nicola Santoro; perhaps they are even easier to read...
What methods there are for verifying that a distributed protocol works right?
In order to survey the possibilities (and also in order to understand the question), I think that it is useful to get an overview of the possible tools from the book Software Specification Methods.
My final decision was to learn TLA+. Why? Even if other languages and tools might seem better, I really decided to try TLA+ because the guy behind it is Leslie Lamport. That is, not just a prominent figure in distributed systems, but also the author of LaTeX!
You can get the TLA+ book and several examples for free.
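To give a flavour of what such tools automate (this is my own toy Python sketch, not TLA+ or the TLC model checker): exhaustively enumerate every interleaving of a small protocol's steps and check an invariant in each reachable state. The "protocol" here is a deliberately naive, non-atomic test-then-set mutual exclusion attempt, chosen only so that a violation gets found.

    from collections import deque

    # Each of two processes runs:  pc 0: reg := flag   pc 1: if reg == 0: flag := 1, enter CS (pc 2)
    #                              else give up (pc 3).
    # The read and the write are separate steps, so the scheme is not atomic -
    # exactly the kind of bug a model checker should catch.
    START = (0, (0, 0), (None, None))  # (shared flag, per-process pc, per-process register)

    def step(state, p):
        """Successor state when process p takes one step, or None if p has finished."""
        flag, pcs, regs = state
        pcs, regs = list(pcs), list(regs)
        if pcs[p] == 0:                     # non-atomic read of the shared flag
            regs[p], pcs[p] = flag, 1
            return (flag, tuple(pcs), tuple(regs))
        if pcs[p] == 1:                     # test, then (maybe) set - still not atomic
            if regs[p] == 0:
                flag, pcs[p] = 1, 2         # enters the critical section
            else:
                pcs[p] = 3                  # gives up
            return (flag, tuple(pcs), tuple(regs))
        return None                         # pc 2 or 3: no further steps

    def check():
        """Breadth-first search over all interleavings, checking mutual exclusion."""
        seen, queue = {START}, deque([START])
        while queue:
            state = queue.popleft()
            _, pcs, _ = state
            if pcs[0] == 2 and pcs[1] == 2:
                return f"invariant violated: both processes in the critical section, state={state}"
            for p in (0, 1):
                nxt = step(state, p)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return "invariant holds in every reachable state"

    print(check())   # reports a violation: naive test-then-set is not mutually exclusive

TLA+ and TLC perform the same kind of exhaustive state exploration, but over a proper specification language and with far better support for fairness, liveness and counterexample traces.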
There are many classic papers written by Leslie Lamport
(http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html) and Edsger Dijkstra
(http://www.cs.utexas.edu/users/EWD/).
For the database side, one major current is the NoSQL movement; many projects are appearing in the market, including CouchDB (couchdb.apache.org), MongoDB and Cassandra. These all promise scalability and manageability (replication, fault tolerance, high availability).
One good book is Birman's Reliable Distributed Systems, although it has its detractors.
If you want to formally verify your protocol you could look at some of the techniques in Lynch's Distributed Algorithms.
It is likely that whatever protocol you are trying to implement has been designed and analysed before. I'll just plug my own blog, which covers e.g. consensus algorithms.

What are areas where you can program artificial intelligence? [closed]

Welcome!
I really enjoyed programming artificial intelligence during my studies - neural networks, expert systems and so on. But at work I develop mainly web applications.
Now I am thinking about returning to that kind of programming, maybe as a hobby, or maybe at work. Are there areas where AI is commonly used in application development, where a programmer with such skills can look for work?
Or maybe I could sell some ideas to my boss and use AI to extend some of our applications.
What are your experiences and ideas with using AI in applications?
I recently started reading the book Programming Collective Intelligence. It's an excellent book which discusses exactly what you are looking for - using AI techniques in web applications.
The book is written clearly and is easy to understand, explains everything in terms of real applications (it covers how some commonly used technology works: Google PageRank, Amazon's recommendation system, matchmaking websites, link recommendation systems, Bayesian spam filters and more), and it uses genuinely useful examples based on real data (the eBay API, Facebook API, etc. are used to collect data). In one chapter it even explains how to draw graphs (I mean the data structure, not bar/line/etc. charts) optimally (so that no nodes are too close together, there are minimal overlapping edges, etc.), which could be useful for, for example, mapping social networks.
I would recommend having a look at it and see the different ways AI can be applied to web applications.
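In the spirit of the recommendation systems that book walks through, here is a small, hedged Python sketch of user-based collaborative filtering; the ratings data and item names are invented for illustration.

    from math import sqrt

    # Toy user -> {item: rating} data, standing in for real purchase or rating history.
    ratings = {
        "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
        "bob":   {"book_a": 4, "book_b": 2, "book_d": 5},
        "carol": {"book_b": 5, "book_c": 2, "book_d": 3},
    }

    def similarity(u, v):
        """Cosine similarity over the items two users have both rated."""
        shared = set(ratings[u]) & set(ratings[v])
        if not shared:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
        norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
        norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
        return dot / (norm_u * norm_v)

    def recommend(user):
        """Score items the user hasn't seen by similarity-weighted ratings from others."""
        scores = {}
        for other in ratings:
            if other == user:
                continue
            sim = similarity(user, other)
            for item, rating in ratings[other].items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * rating
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(recommend("alice"))   # suggests book_d, weighted by how similar bob and carol are to alice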
As a counter-example, parsing data acquired from water testing equipment would probably be a bad place to use artificial intelligence:
The Daily WTF: No, We Need a Neural Network
Just a reminder for all of us to choose the right tool for the right job.
Neural networks are great for working on images, so one area of web applications you could use AI for would be identifying and/or manipulating patterns in images over large sets of data. For example, a site like Flickr or Facebook might have some interesting training material to identify people based on face or associating groupings of pixels (those being the features you work with) with certain items mentioned in captions or tags.
In terms of text manipulation, there's a lot of stuff, but it's usually icing on the cake for other web apps. I'm talking mostly in the areas of automatic completion in search bars and back-end things the user doesn't usually see, like automatic machine translation or improved search capability.
The problem with putting AI at the front of an application's offering is that usually, artificial intelligence is not a feature in and of itself, but rather a way of negotiating large data sets effectively without regular prompts from the designer. In general, a user will associate with an application on a one-to-one basis, and therefore judges it only on the quality of a relatively low number of responses.
Email spam filtering systems - definitely.
Any other security applications which need to spot patterns for malicious stuff.
You could probably analyze the behavior of your web applications' visitors: how they navigate inside the website, in order to provide a better, optimized interface. It depends on what kind of web applications you're working on. For online shopping you can come up with suggestions extrapolated from customers' habits.
You can also detect "abnormal" behavior and fraud. Fraud and bot detection can take advantage of AI.
Forecasting, of course.
It has immense value for businesses (e.g. inventory optimization) and is especially valuable in a time of global crisis.
Games do need AI.
Expert systems too.
Outside of games, I've seen very few commercial uses of AI.
It could, in theory, be very useful in industrial robotics and imaging, but those fields also tend to be very conservative, and uncomfortable with non-deterministic algorithms.
You might want to research what iRobot does, but even they use rather simple algorithms in their commercial robots.
In the area of cognitive architectures (e.g. Soar, ACT-R, etc), rather than concentrating on algorithms like A* and games, researchers investigate models of human behavior including decision-making, cultural interchange and learning. They often focus on cognitive plausibility, i.e. how close does a model track what a human would do, including timing, etc.
These systems tend to be strictly research-based with limited commercial applications. So far anyway. Military applications, especially for training, are fairly common though.
Image processing for detecting cancer! (We actually implement IEEE papers about it; creating the algorithms is way harder than coding them, so we write papers about the performance of other papers.)
Risk assessment is a pretty good case for neural networks, mostly because they're pretty good at pattern matching. Insurance and credit companies use them to some degree for determining the risk of a customer.
I have done some extensive research on using artificial neural networks for the classification of underwater sound sources. The algorithm seemed to work quite well, especially since I devoted a big portion of the work to figuring out which combination of Fourier transform coefficients made the best feature set for the classification (using Principal Component Analysis).
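As a rough, hedged sketch of that kind of pipeline (reduce the coefficient set with PCA, then classify with a small neural network), here is a toy scikit-learn example; the synthetic arrays merely stand in for Fourier coefficients of two classes of sound source, and none of the numbers reflect the actual study.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in data: 64 "Fourier coefficients" per sample, two classes.
    rng = np.random.default_rng(0)
    n_per_class, n_coeffs = 200, 64
    X = np.vstack([
        rng.normal(0.0, 1.0, (n_per_class, n_coeffs)),   # class 0
        rng.normal(0.5, 1.0, (n_per_class, n_coeffs)),   # class 1
    ])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # PCA keeps the coefficient combinations carrying the most variance; the MLP
    # then classifies in that reduced space.
    model = make_pipeline(
        PCA(n_components=10),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))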
Anything (seriously):
http://highlevellogic.blogspot.com/2010/09/high-level-logic-rethinking-software.html
The High Level Logic (HLL) Open Source project is about finding and coding high level logic under which all the other AI (and in fact, all programming) fits. There are serious concrete ideas and code. HLL is already an application framework.
