I'm a bit confused about artificial intelligence.
I understand it as the capability of a machine to learn new things, or to do different things, without actually executing code already written by someone.
On SO I see many threads about AI in games, but in my opinion that is not AI, because if it were, then every piece of software, even a print command, would have to be called AI. In games there is just code that is executed; I would call it pseudo-AI.
Am I wrong? Should this also be considered AI?
Wikipedia says this:
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" [1], where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
What you are considering is more specifically referred to as Machine Learning, which is indeed a subbranch of AI. As you can see from the second sentence above, however, the "AI" considered in games also fits perfectly well into this definition.
Of course, the actual line between what is AI and what is not is quite blurry. This is also due to the fact that everyone and his mother believes they know what "AI" means.
I suggest you grab yourself a more scientific book (say, the classic Russell & Norvig) to get a more thorough grasp of the different fields that sit under the huge roof of what we simply refer to as "AI".
"Minsky and McCarthy, both considered founders of AI, say that artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human had to apply intelligence to accomplish the task."
A more modern definition is to turn this on its head:
Artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human did not need to apply intelligence to accomplish the task.
Intelligence is the ability to do the things that don't require reasoning: things like understanding and generating language, sequencing your leg muscles as you walk across the floor, or enjoying a symphony. You don't stop to reason for any of those things. You understand, intuitively, how to interpret things in your visual field, language, and all other sensory input, and you can do the right thing without reasoning. You can easily prepare your entire breakfast without any reasoning. :-)
Doing things that "require thought" or reasoning, like playing chess or solving integrals are things that computers can already do.
This misunderstanding about what intelligence really is has cost us 60 years and a million man-years of banging our head against the wall.
Deep learning is currently the most popular expression of an alternative path to a "better kind of AI". Artificial Intuition is a special branch of Deep Learning tailored to understanding text.
The easiest way to know whether you are dealing with classical (futile) or modern AI is whether the system requires you to supply any models of the world (MOTW). Any MOTW means the AI is limited to operate in the domain specified by the MOTW and is therefore not a general intelligence. Also, anything with a MOTW is typically not designed to extend that model; this is a very difficult task.
Better to start from a Model of the Mind (MOTM) or a Model of Learning. These can be derived either from neuroscience (difficult) or from epistemology (much easier). A well done MOTM can then learn anything it needs to know to solve problems in any domain.
The main problem for most is to find what's called "a domain-independent method for determining saliency". In other words, all intelligences, natural or artificial, have to spend most of their time answering the question "what matters".
Google my name and "AI" for more.
Frank and Kirt sum up the academic field of AI pretty well. Any difficulty there is defining AI reflects the more general problem of defining real intelligence. If AI has proved anything, it's that we have precious little idea what intelligence is, how amazing organisms are at solving problems, and how difficult it is to get machines to achieve the same results.
As for the use of the term AI in the video games industry, your confusion is justified. The prospect of intelligent characters in games is so compelling that the term long ago took on a life of its own as marketing jargon. However, AI is really just a poorly chosen name for the solving of problems that computers find hard and people find easy. And in that sense, there is plenty of genuine AI work going on in the games industry.
Take a look at AIGameDev.com for a taste of what is currently considered noteworthy in AI game development.
The most important aspect of AI, I believe, is 'curiosity'. Intelligence comes from the very fact that it is a result of curiosity.
There is no precise definition of AI, because intelligence itself is relative and hard to define; many fields, ancient and modern, such as Philosophy and Neuroscience, serve as the foundations of AI. It depends on what your AI is expected to do.
Artificial Intelligence is the attempt to create intelligence from a computer program.
Regardless of whether it's a toy program or neural science, as long as a program is able to mimic human problem-solving skills, or even go beyond them, it is called Artificial Intelligence.
Of course, computer scientists' expectations of how capably a program (or machine) can solve problems keep rising over time. Tic-tac-toe-playing programs were once considered intelligent, until chess programs were invented. Now we are attempting to mimic the human brain through neural networks.
AI for the layman nowadays is applied in most computer games. It's also used in many machines, like airplane autopilots and NASA's Mars rover Curiosity (2012), which is able to detect terrain obstacles and move around them.
Very tricky stuff, AI. If you design a mind that replies to all the right questions with all the right answers, is it AI, or just a talking encyclopedia? What if you can teach the AI by simply talking to it; do you then consider it an AI with a mind, or again just a program? Perhaps the question, or the answer, is whether someone someday makes a machine that looks human, acts human, and thinks it's human, and whether others then feel the same: that it's human, if they don't know it's not. And then, what if it passes that test? You see, it's not really about whether the machine is conscious, or whether it has a mind, as those questions will never be truly answered. It's all about whether it seems conscious, acts conscious, acts like it has a mind, given that's as close as humanity will ever get to understanding that riddle. If a machine acts like it cares, and does helpful things, that's all that really matters, not the rest of the unseen picture. We just have to get this far in the first place. By the way, check out Webo, a teachable AI.
Our brain consists of billions of neurons which basically process all the incoming data from our senses and handle our consciousness, emotions, and creativity, as well as our hormone system, etc.
So I'm completely new to this topic, but doesn't each neuron have a fixed function? E.g.: if a signal of strength x enters, and the last signal was x ms ago, redirect it.
From what I've learned in biology about our nervous system, which includes our brain since both consist of simple neurons, it seems to me that our brain is one big, complicated computer.
Maybe so complicated that things such as intelligence and cognition become possible?
As the most complicated things about a neuron are pretty much the chemical aspects of generating an electrical signal, keeping itself alive, and eventually dividing itself, it should be pretty easy to emulate one on a computer, shouldn't it?
You wouldn't have to worry about keeping your virtual neuron alive, would you?
If you can emulate a single neuron on a computer, which shouldn't be too hard, couldn't you theoretically emulate more than a thousand billion of them, recreating intelligence, cognition, and maybe even creativity?
In my question I'm leaving out the following aspects:
Speed of our current (super) computers
Actually writing a program for emulating neurons
I don't know much about this topic, please tell me if I got anything wrong :)
(My secret goal: Make a copy of my brain and store it on some 10 million TB HDD and make someone start it up in the future)
A neuron-like circuit can be built with a handful of transistors. Let's say it takes about a dozen transistors on average. (See http://diwww.epfl.ch/lami/team/vschaik/eap/neurons.html for an example.)
A brain-sized circuit would require 100 billion such neurons (more or less).
That's 1.2 trillion transistors.
A quad-core Itanium has 2 billion transistors.
You'd need a server rack with 600 quad-core processors to be brain-sized. Think $15M US to purchase the servers. You'll need power management and cooling plus real-estate to support this mess.
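The arithmetic above is easy to check; here is a minimal sketch in Python (the per-neuron and per-CPU transistor counts are the rough assumptions already stated above, not measured values):

    # Back-of-the-envelope estimate of the hardware for a "brain-sized" circuit,
    # using the rough figures quoted above.
    TRANSISTORS_PER_NEURON = 12     # assumed average for a neuron-like circuit
    NEURONS_IN_BRAIN = 100e9        # ~100 billion neurons (more or less)
    TRANSISTORS_PER_CPU = 2e9       # quad-core Itanium: ~2 billion transistors

    total_transistors = TRANSISTORS_PER_NEURON * NEURONS_IN_BRAIN
    cpus_needed = total_transistors / TRANSISTORS_PER_CPU

    print(f"Total transistors: {total_transistors:.1e}")  # 1.2e+12 (1.2 trillion)
    print(f"CPUs needed:       {cpus_needed:.0f}")        # 600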
One significant issue in simulating the brain is scale. The actual brain only dissipates a few watts. Power consumption is 3 square meals per day. A pint of gin. Maintenance is 8 hours of downtime. Real estate is a 42-foot sailboat (22 Net Tons of volume as ships are measured) and a place to drop the hook.
A server cage with 600 quad-core processors needs a lot more energy, cooling, and maintenance than that. It would require two full-time people to keep this "brain-sized" server farm running.
It seems simpler to just teach the two people what you know and skip the hardware investment.
Roger Penrose presents the argument that human consciousness is non-algorithmic and thus not capable of being modeled by a conventional Turing-machine type of digital computer. If that's the case, you can forget about building a brain with a computer...
Simulating a neuron is possible and therefore theoretically simulating a brain is possible.
The two things that always stump me, though, are input and output.
We have a very large number of nerve endings that all provide input to the brain. Without them the brain is useless. How can we simulate something as complicated as the human brain without also simulating the entire human body!?!
Output: once the brain has "dealt with" all of the inputs it gets, what is its output? How could you say that the "copy" of your brain was actually you without again hooking it up to a real human body that could speak and tell you?
All in all, a fascinating subject!!!!
The key problem with simulating neural networks (and the human brain is a neural network) is that they function continuously, while digital computers function in cycles. In a real neural network, different neurons operate independently and in parallel, while on a computer you only simulate discrete system states.
That's why adequately simulating real neural networks is very problematic at the moment, and we're very far from it.
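To make the continuous-vs-discrete point concrete, here is a minimal sketch of how such a simulation is typically discretized: a leaky integrate-and-fire neuron stepped forward in fixed time slices (all parameters are illustrative, not biologically calibrated):

    # Minimal leaky integrate-and-fire neuron, simulated in discrete time steps.
    # A real neuron evolves continuously; here we approximate that evolution with
    # Euler integration at a fixed resolution dt, which is exactly the
    # discrete-state compromise described above.
    dt = 0.1          # time step (ms); smaller = closer to continuous, but slower
    tau = 10.0        # membrane time constant (ms), illustrative value
    v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0  # potentials (mV)
    input_current = 20.0                             # constant drive, arbitrary units

    v = v_rest
    spikes = 0
    for step in range(10000):                # simulate 1000 ms
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt                         # one discrete "system state" update
        if v >= v_thresh:                    # threshold crossed: fire and reset
            spikes += 1
            v = v_reset

    print(f"{spikes} spikes in 1 second of simulated time")

Shrinking dt makes the approximation better and the simulation slower; that trade-off, multiplied by billions of neurons running in parallel, is the scale problem in a nutshell.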
Yes, the Blue Brain Project is getting close, and I believe Moore's Law has a $1000 computer getting there by 2049.
The main issue is that our brains are based largely on controlling a human body, which means that our language comprehension and production, the basis of our high-level reasoning and semantic object recognition, is strongly tied to its potential and practiced outputs to a larynx, tongue, and face muscles. Further, our reward systems are tied to signals that indicate sustenance and social approval, which are not the goals we generally want a brain-based AI to have.
An exact simulation of the human brain will be useful in studying the effects of drugs and other chemicals, but I think that the next steps will be in isolating pathways that let us do things that are hard for computers (e.g. visual system, fusiform gyrus, face recognition), and developing new or modifying known structures for representing concepts.
Short: yes, we will surely be able to reproduce artificial brains someday, but no, it may not be with our current computer models (Turing machines), because we simply don't yet know enough about the brain to tell whether we need new computers (super-Turing or biologically engineered brains) or whether current computers (with more power/storage space) are enough to simulate a whole brain.
Long:
Disclaimer: I am working in computational neuroscience research and I am interested both by the neurobiological side and the computational (artificial intelligence) side.
Most of the answers take as true the OP's postulate that simulating neurons is enough to capture the whole brain state and thus simulate a whole brain.
That's not true.
The brain is more than just neurons.
First, there is the connectivity, the synapses, which is of paramount importance, maybe even more than the neurons themselves.
Secondly, there are glial cells, such as astrocytes and oligodendrocytes, which also possess their own connectivity and communication systems.
Thirdly, neurons are heterogeneous, which means there is no single template model of a neuron that we could just scale up to the required number to simulate a brain; we also have to define multiple types of neurons and place them pertinently in the right places. Plus, the types can be continuous, so in fact you can have neurons that are halfway between three different types...
Fourthly, we don't know much about the rules of the brain's information processing and management. Sure, we discovered that the cerebellum works pretty much like an artificial neural network using stochastic gradient descent, and that the dopaminergic system works like TD-learning (see the sketch after this list), but we have no clue about the rest of the brain; even memory is out of reach (although we guess it's something close to a Hopfield network, there's no precise model yet).
Fifthly, there are so many other examples from current research in neurobiology and computational neuroscience showing the complexity of the brain's objects and network dynamics that this list could go on and on.
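For the TD-learning comparison mentioned in the list above, here is a minimal sketch of the TD(0) value update that the dopaminergic reward-prediction-error story refers to (the toy states, rewards, and parameters are invented for illustration):

    # Minimal TD(0) value learning on a toy 3-state chain: A -> B -> C (terminal).
    # The update term (reward + gamma * V[next] - V[state]) is the "prediction
    # error" that dopamine neuron firing is often compared to.
    alpha, gamma = 0.1, 0.9   # learning rate and discount factor (illustrative)
    V = {"A": 0.0, "B": 0.0, "C": 0.0}
    episode = [("A", "B", 0.0), ("B", "C", 1.0)]  # (state, next_state, reward)

    for _ in range(100):                     # replay the episode many times
        for state, nxt, reward in episode:
            td_error = reward + gamma * V[nxt] - V[state]
            V[state] += alpha * td_error     # move the estimate toward the target

    print(V)  # V["B"] approaches 1.0, V["A"] approaches gamma * V["B"] = 0.9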
So in the end, your question cannot be answered: we simply do not know enough about the brain yet to know whether our current computers (Turing machines) can reproduce the complexity of biological brains and give rise to the full spectrum of cognitive functions.
However, the field of biology is getting closer and closer to computer science, as you can see with biologically engineered viruses and cells that are programmed pretty much the way you develop a computer program, and gene therapies that basically re-engineer a living system based on its "class" template (the genome). So I dare say that once we know enough about the brain's architecture and dynamics, the in-silico reproduction won't be an issue: if our current computers cannot reproduce the brain because of theoretical constraints, we will devise new computers. And if only biological systems can reproduce the brain, we will be able to program an artificial biological brain (we can already 3D-print functional bladders, skin, veins, hearts, etc.).
So I would dare say (even if it's controversial; this is my own claim) that yes, artificial brains will surely be possible someday, but whether it will be as a Turing-machine computer, a super-Turing computer, or a biologically engineered brain remains to be seen, depending on our progress in understanding the brain's mechanisms.
I don't think they are remotely close enough to understanding the human brain to even begin thinking about replicating it.
Scientists would have you think we are nearly there, but with regards to the brain we're not much further along than Dr. Frankenstein.
What is your goal? Do you want a program that can make intelligent decisions, or a program that provides a realistic model of how the human brain actually works? Artificial intelligence can be approached from the perspective of psychology, where the goal is to simulate the brain and thereby get a better understanding of how humans think, or from the perspective of mathematics, optimization theory, decision theory, information theory, and computer science, in which case the goal is to create a program capable of making intelligent decisions in a computationally efficient manner. The latter, I would say, is pretty much solved, although advances are definitely still being made. When it comes to a realistic simulation of the brain, I think we were only recently able to simulate a cat's brain semi-realistically; when it comes to humans, it would not be very computationally feasible at present.
Researchers far smarter than most reckon so; see Blue Brain from IBM and others.
The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.
Theoretically the brain can be modeled using a computer (as software and hard/wetware are compatible or mutually expressible). The question isn't a theoretical one as far as computer science goes, but a philosophical one:
Can we model the (chaotic) way in which a brain develops? Is a brain's power its hardware, or the environment that shapes the development and emergent properties of that hardware as it learns?
Even more mental:
If I modeled my own brain with 100% accuracy and then started the simulation, and that brain had my memories (as it has my brain's physical form)... is it me? If not, what do I have that it doesn't?
I think that if we are ever in a position to emulate the brain, we should by then have been working on logical systems based on biological principles, with better applications than the brain itself.
We all have a brain, and we all have access to its amazing power already ;)
A word of caution. Current projects on brain simulation work on a model of a human brain. Your idea about storing your mind on a hard disk is crazy: if you want a replica of your mind, you'll need two things. First, another "blank" brain. Second, a method to perfectly transfer all the information contained in your brain, down to the quantum states of every atom in it.
Good luck with that :)
EDIT: The dog ate part of my text.
Out of curiosity, I've been reading up a bit on the field of Machine Learning, and I'm surprised at the amount of computation and mathematics involved. One book I'm reading through uses advanced concepts such as Ring Theory and PDEs (note: the only thing I know about PDEs is that they use that funny looking character). This strikes me as odd considering that mathematics itself is a hard thing to "learn."
Are there any branches of Machine Learning that use different approaches?
I would think that approaches relying more on logic, memory, construction of unfounded assumptions, and over-generalizations would be a better way to go, since that seems more like the way animals think. Animals don't (explicitly) calculate probabilities and statistics; at least as far as I know.
The behaviour of the neurons in our brains is very complex, and requires some heavy duty math to model. So, yes we do calculate extremely complex math, but it's done in a way that we don't perceive.
I don't know whether the math you typically find in A.I. research is entirely due to the complexity of the natural neural systems being modelled. It may also be due, in part, to heuristic techniques that don't even attempt to model the mind (e.g., using convolution filters to recognise shapes).
If you want to avoid the math but do AI-like stuff, you can always stick to simpler models. 90% of the time, the simpler models will be good enough for real-world problems.
I don't know of a branch of AI that is completely decoupled from math, though. Probability theory is the tool for handling uncertainty, which plays a major role in AI. So even if there were a not-so-mathematical subfield, math techniques would mostly be a way to improve on those methods, and thus the mathematics would be back in the game. Even simple techniques like naive Bayes and decision trees can be used without a lot of math, but real understanding comes only through it.
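As an illustration of that last point, here is a minimal naive Bayes sketch using scikit-learn (one library choice among several); the tiny dataset is made up:

    # Naive Bayes classification without touching the underlying math:
    # the library handles the probability estimates internally.
    from sklearn.naive_bayes import GaussianNB

    # Toy dataset: two features per sample, two classes (entirely made up).
    X = [[1.0, 2.1], [1.2, 1.9], [0.9, 2.2],   # class 0
         [3.0, 0.5], [3.2, 0.4], [2.9, 0.6]]   # class 1
    y = [0, 0, 0, 1, 1, 1]

    clf = GaussianNB()
    clf.fit(X, y)

    print(clf.predict([[1.1, 2.0], [3.1, 0.5]]))   # -> [0 1]
    print(clf.predict_proba([[1.1, 2.0]]))          # per-class probabilities

You can use this productively knowing only "it estimates which class is more probable"; the math comes back when you need to understand why it fails.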
Machine learning is very math-heavy. It is sometimes said to be close to "computational statistics", with a little more focus on the computational side. You might want to check out "Collective Intelligence" from O'Reilly, though. I hear it has a good collection of ML techniques that aren't too heavy on the math.
You might find evolutionary computing approaches to machine learning a little less front-loaded with heavy-duty maths, approaches such as ant-colony optimisation or swarm intelligence.
I think you should put to one side, if you hold it as your question kind of suggests you do, the view that machine learning is trying to simulate what goes on in the brains of animals (including Homo Sapiens). A lot of the maths in modern machine learning arises from its basis in pattern recognition and matching; some of it comes from attempts to represent what is learnt as quasi-mathematical statements; some of it comes from the need to use statistical methods to compare different algorithms and approaches. And some of it comes because some of the leading practitioners come from scientific and mathematical backgrounds rather than computer science backgrounds, and they bring their toolset with them when they come.
And I'm very surprised that you are surprised that machine learning involves a lot of computation, since the long history of AI has proven that it is extremely difficult to build machines that (seem to) think.
I've been thinking about this kind of stuff a lot lately.
Unfortunately, most engineer/mathematician types are so tied to their own familiar mathematical/computational worlds, they often forget to consider other paradigms.
Artists, for example, often think of the world in a very fluid way, usually untethered by mathematical models. Much of what happens in art is archetypal or symbolic, and often doesn't follow any seemingly conventional logical arrangement. There are, of course, very strong exceptions to this. Music, for instance, especially in music theory, often requires strong left brained processes and so forth. In truth, I would argue that even the most right brained activities are not devoid of left logic, but rather are more complex mathematical paradigms, like chaos theory is to the beauty of fractals. So the cross-over from left to right and back again is not a schism, but a symbiotic coupling. Humans utilize both sides of the brain.
Lately I've been thinking about a more artful representational approach to math and machine language, even in a banal world of ones and zeroes. The world has been thinking about machine language in terms of familiar mathematical, numeric, and alphabetic conventions for a fairly long time now, and it's not exactly easy to realign the cosmos. Yet in a way, it happens naturally. Wikis, WYSIWYGs, drafting tools, photo and sound editors, blogging tools, and so forth: all these tools do the heavy mathematical and machine-code lifting behind the scenes to make for a more artful end experience for the user.
But we rarely think of doing the same lifting for coders themselves. To be sure, code is symbolic, by its very nature, lingual. But I think it is possible to turn the whole thing on its head, and adopt a visual approach. What this would look like is anyone's guess, but in a way we see it everywhere as the whole world of machine learning is abstracted more and more over time. As machines become more and more complex and can do more and more sophisticated things, there is a basic necessity to abstract and simplify those very processes, for ease of use, design, architecture, development, and...you name it.
That all said, I do not believe machines will ever learn anything on their own without human input. But that is another debate, as to the character of religion, God, science, and the universe.
I attended a course in machine-learning last semester. The cognitive science chair at our university is very interested in symbolic machine learning (That's the stuff without mathematics or statistics ;o)). I can recommend two outstanding textbooks:
Machine Learning (Thomas Mitchell)
Artificial Intelligence: A Modern Approach (Russell and Norvig)
The first one is more focused on machine learning, but it's very compact and has a very gentle learning curve. The second one is a very interesting read with a lot of historical information.
These two titles should give you a good overview (All aspects of machine learning not just symbolic approaches), so that you can decide for yourself which aspect you want to focus on.
Basically there is always mathematics involved but I find symbolic machine learning easier to start with because the ideas behind most approaches are often amazingly simple.
Mathematics is simply a tool in machine learning. Knowing the maths enables one to efficiently approach the modelled problems at hand. Of course it might be possible to brute force one's way through, but usually this would come with the expense of lessened understanding of the basic principles involved.
So, pick up a maths book and study the topics until you're familiar with the concepts. No mechanical engineer is going to design a bridge without understanding the basic maths behind the system's behaviour; why should this be any different in the area of machine learning?
There is a lot of stuff in Machine Learning outside just the math.
You can build the most amazing probabilistic model using a ton of math, but fail because you aren't extracting the right features from the data (which often requires domain insight) or because you're having trouble figuring out why your model fails on a particular dataset (which requires you to have a high-level understanding of what the data provides and what the model needs).
Without the math, you cannot build new complicated ML models by yourself, but you sure can play with existing tried-and-tested ones to analyze information and do cool things.
You still need some math knowledge to interpret the results the model gives you, but this is usually a lot easier than having to build these models on your own.
Try playing with http://www.cs.waikato.ac.nz/ml/weka/ and http://mallet.cs.umass.edu/. The former comes with all the standard ML algorithms, along with a lot of amazing features that let you visualize your data and pre/post-process it to get good results.
Yes, machine learning research is now dominated by researchers trying to solve the classification problem: given positive/negative examples in an n-dimensional space, what is the best n-dimensional shape that captures the positive ones?
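As a minimal sketch of that framing, a perceptron learns one of the simplest such "shapes", a separating hyperplane, from labeled examples (the toy data is invented for illustration):

    # Perceptron: learn a hyperplane w.x + b = 0 that separates the positive
    # examples from the negative ones in n-dimensional space (here n = 2).
    # Toy, linearly separable data: label +1 if x0 + x1 > 1, else -1.
    data = [((0.2, 0.1), -1), ((0.4, 0.3), -1), ((0.9, 0.8), +1),
            ((1.0, 0.7), +1), ((0.1, 0.5), -1), ((0.8, 0.9), +1)]

    w = [0.0, 0.0]
    b = 0.0
    for _ in range(100):                      # epochs over the data
        for x, label in data:
            activation = w[0] * x[0] + w[1] * x[1] + b
            if label * activation <= 0:       # misclassified: nudge the plane
                w[0] += label * x[0]
                w[1] += label * x[1]
                b += label

    print(w, b)  # the learned "shape": a separating line in 2-D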
Another approach is taken by case-based reasoning (or case-based learning) where deduction is used alongside induction. The idea is that your program starts with a lot of knowledge about the world (say, it understands Newtonian physics) and then you show it some positive examples of the desired behavior (say, here is how the robot should kick the ball under these circumstances) then the program uses these together to extrapolate the desired behavior to all circumstances. Sort of...
Firstly, case-based AI and symbolic AI are all theories; there are very few projects that have employed them successfully. Nowadays AI is Machine Learning, and even neural nets are a core element of ML, which uses gradient-based optimization. If you want to do Machine Learning, then Linear Algebra, Optimization, etc. are a must.
In my career I've come across two broad types of theory: physical theories and educational/management theories:
Physical theories are either correct (under appropriate conditions) or incorrect, as judged by the physical world.
Educational/management theories have the appearance of being like physical theories, but they lack rigorous testing. At best they give new ways of thinking about problems. Multiple theories are useful because one of them may speak to you in the right way.
As a hobbyist student of software engineering, I see that there appear to be a lot of theories of software engineering (such as agile programming, test-driven design, patterns, extreme programming). Should I consider these theories to be physical-like or educational/management-like?
Or have I mis-understood software engineering and find myself being in the position of "not even wrong"?
Software engineering is ultimately about psychology, how humans manage complexity. So software engineering principles are far more like education and management theories than physical principles.
Some software engineering has solid math behind it: O(n log n) sorts are faster than O(n^2) sorts, etc. But mostly software engineering is about how humans think about software. How to organize things so that maintainers don't go crazy, anticipating what is likely to change and what is not, preventing and detecting human errors, etc. It's a branch of psychology or sociology.
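That O(n log n) vs. O(n^2) point is one of the few claims here you can verify mechanically; a quick, illustrative timing sketch (the array size and the insertion-sort implementation are arbitrary choices):

    # Compare an O(n^2) insertion sort with Python's built-in O(n log n) sort.
    import random, time

    def insertion_sort(a):
        a = list(a)
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:   # shift larger elements right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    data = [random.random() for _ in range(5000)]

    t0 = time.perf_counter(); insertion_sort(data); t1 = time.perf_counter()
    sorted(data);                                    t2 = time.perf_counter()

    print(f"O(n^2) insertion sort: {t1 - t0:.3f} s")
    print(f"O(n log n) built-in:   {t2 - t1:.6f} s")  # orders of magnitude faster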
I think the appropriate theoretical split is between the "harder" sciences (where there can be proofs) and the softer topics with qualitative answers and few proofs, if any.
Software to me is mostly about language and communication, a topic that is qualitative and subjective mostly. Every now and then we touch into algorithms and other "hard" areas, where proofs and rigorous formalisms exist. So, yes, both please.
Not even wrong.
All the software engineering "theories" seem to be nothing but advice on particular things to try to see if they make you and your team more productive. Even if one could set them up to be falsifiable like scientific theories, there would be very little point to it. That is not to say that it is not worthwhile to learn about them -- au contraire, you should become familiar with as many of them as possible and try to figure out in what kinds of teams and environment they may work better. But be careful: avoid dogma and thinking there are silver bullets.
I wouldn't call agile programming, test-driven design, patterns, extreme programming, etc. 'theories'; they're methodologies, or work styles. They don't make any assertions.
Generally the field of Informatics is divided into 4 areas (need to find a link to the source, SWEBOK?), which are distinct although related and interconnected:
Computer Science
Software Engineering
Computer Engineering
Information Systems
There is a good analysis of engineering vs. science in Steve McConnell's "Professional Software Development". Check out his Software Engineering, Not Computer Science.
Software development is more about engineering - finding practical solutions to practical problems - than anything else. It is correct that software engineering relies on computer science, mathematics, complexity theory, systematics, psychology, and other disciplines, but it cannot be equated with any of them, nor is it a batch of any of them.
Besides theories, there are also frameworks, models, and rules of thumb. Ideas, sure, but based on a less rigorous foundation, which loosely belong to your education/management category.
Computer Science has some strong foundational theories (physical ones by your definition), but these mostly consist of tying together the smaller elements.
Software Engineering on the other hand, is a relatively new discipline that involves utilizing computers and occasionally Computer Science to build software systems. Most of the practice in that arena is entirely based on non-rigorous experimental and anecdotal evidence. Since the jury is still out on even the simplest of issues, most of what passes for practices could be best described as pure guess-work and irrational preference. It's one of those disciplines where you really do have to know a lot to realize how much is built on a house of very unstable cards.
Paul.
Being intangible, programming is a very difficult activity to relate to another human being, even other programmers. Software engineering tries to add structure where there is none, but such structure is not rooted in the inevitability of reality. So all these approaches become like religions in how groups of people behave when trying to appease their technical gods (or demons).
All these theories and best practices still haven't brought us to the point where we can produce software systems reliably and predictably. The newest of these surveys is dated 2001; Jeff's column from 2006 still laments high failure rates.
It'd be interesting to see if anybody's working on an updated survey.
Avionics and the software running my car don't seem to fail at anything close to the rates quoted for enterprise software. Why don't enterprise developers follow their practices more closely? Maybe we should all be writing Ada....[just kidding]
They're like recipes: they're guidelines, whose success depends:
Partly, on the quality of the recipe
Partly, on the quality of the ingredients
Partly, on the skill of (and time available to) the practitioners
For me, it's my own theory, with many of the others used as a base. I don't know anyone who uses a single specific theory. And that's not a cop-out answer.
Just as there are different languages, theories/practices/methodologies are to be used in distinct situations. The structure, rules, and definitions are all the ways in which people understand how things are to be accomplished, but what is to be accomplished is subjective.
Adapt, knowing the agile, extreme, or other methods at the discretion of the client, project, programmer, time, and especially what makes you successful/happy. Be a team and adjust/adapt to what your team is doing for the greater good; just keep in mind to have something that you have defined in your own mind, or it's not just chaos.
[SOAPBOX]
I started programming on the Atari 400 with a converted flat keyboard and 64K upgrade. When I started college, it was VB 1.0 which I saw my Economics Teacher use to build a teaching tool to help people learn more about economics using graphs and visual inputs. That was cool! And I knew I could do that.
This same Economics teacher, who later became an IT teacher too (he was good), asked if I would teach a class on debugging. He said, "I haven't met someone that understands the concepts and has a natural ability to debug as fast as you do; would you teach us what you know and how you do it?" This was a boost to my ego, of course, but also a chance to teach, mentor, and help others.
Every one of those instances has fueled my desire to help other people. For me, I want a computer to do exactly what I want, and to help other people in their business and home lives to increase their quality of living, learn more, and get more done.
Someone said to me one time, "You're only as good as your tools". Learn, practice, and grow.
If you've defined something, it's working, has order, and it stretches you and the boundaries, you're not wrong.
Is there a thing like "software engineering"?
Or is software development "engineering"?
Facts:
Our industry is very young relative to many other "engineerings".
We still do not have "solid" practices and "theories".
So to be honest, if we look at it from the perspective of the other, mature engineering disciplines, it is hard to call what we do "engineering".
We have a very bad reputation for failing [our failure rates would not be acceptable in many engineering branches].
Reality or Fantasy? Pick one :-)
Some guys say that we do not have "solid" practices and "theories" since we are a young "engineering" branch, and that in time we will. Those guys say that we need to work more on "theory" or foundations.
Some guys say that software development is an "experimental social activity" because of the nature of our problem domain. We will have practices, theories, methodologies, and processes, but they will always have a second-order effect. The unique people, their feelings and qualities, and their interactions with the rest are more influential. Those guys see software development as a Complex Adaptive System.
And there is also another reality:
80% of software development activities really do not need a very brilliant mind. Any "average" person can do it.
But the remaining 20% is a very hard and multidisciplinary task.
There is even another, newer perspective. My own :-)
This view says that software development is not a branch of "Engineering"; it is a branch of the "Natural Sciences and Social Sciences". So we need Software Anthropology and Anthropologists.
Theory: I think a theory is anything that describes "how" a natural system works and, in order to be proven, rests on logical deductions based on previous knowledge, substantiated by logical inductions made using experiments.
You call the whole body of these theories and experiments Science.
Software: Software is a man-made system, i.e., an engineered system. Engineering applies Science in order to create new systems. In that regard, pure Software Engineering applies the science of discrete mathematical systems.
But Commercial Software Engineering has a different motivation, called Economics.
In that regard, it has to take into account all the factors that affect Economics, the chief of them being People. So Psychology plays a huge part.
But since Psychology itself is just a theory of "how" the human mind works, based on pattern recognition without logical deductions grounded in human biology, it has many flaws, such as mistaking correlation for causation.
So, yeah, I think from the above you can better understand what Commercial Software Engineering in total is.
As a computer software expert witness, I am required to analyze a huge range of different software technologies. During my deposition or trial testimony, the opposing expert may direct questions targeted at exposing or revealing my weaknesses. There is no time for research or education.
Given that I can't be an expert in every technology, what are the most versatile and transferable skills or technologies I should learn?
I will start with the obvious:
Databases are omnipresent (but which are the best archetypes?)
C is often involved due to the prevalence of older Windows and DOS based systems
What should be added to this list?
I may be misreading your question, but I suspect that if you are being called upon as an expert witness, you already have the expertise they are seeking. I suppose that learning more technical aspects of any technology would make you more likely to become an expert witness, but ultimately I would say the best skill is truthfulness. If you don't know, say so. Any unknown questions can then become the "to be studied" list for later review.
just my 2 cents ...
It would be silly to call you as an expert witness if you cannot be an expert in the line of questioning.
Well, the big thing about being a witness is to listen to the counsel for which you are testifying. In the computer world, your credibility is not easily impugned. If they were to try to do so, it would be by calling into question your formal education or training as insufficient to be an expert. They won't be asking you to explain what a Turing Machine is, or how to write a sorting algorithm in LISP, unless it is directly relevant to the matter at hand. They won't be playing "Gotcha!" with difficult technical questions, as it won't resonate with the judge/jury. How many jury members can you picture saying this: "I can't BELIEVE that "expert" doesn't understand database normalization! What a fraud!"? If the jury doesn't understand the question, they won't understand the answer. Any first-year law student will tell you all about this problem (it comes up in all kinds of expert testimony situations).
No, your credibility will be questioned in laymen's terms. If you are being asked to testify, it's because you have the answers that are relevant. Stick to those and don't do any tricks (as your counsel will tell you), and you'll be fine. If your information is correct, and your degree/experience is solid, you may not even be cross-examined (they will just find their own expert to say the opposite of what you said).
Computer software expert witnesses also need to have a good understanding of networking technology and be able to explain it to a jury or judge. Because a great deal of software is client/server based, it is important to be able to explain the way firewalls, IP addresses, HTTP, and internet routers work, and why you can tell that certain pieces of software were definitely used at certain times and locations.
Being familiar with server operating systems and the log files they generate is also helpful.
I would say forget learning new technology beyond understanding industry concepts and how they're really applied in the real world. The key thing you need to be able to do as an expert witness is explain these concepts in terms that can be easily understood by the layman. You already know this stuff or you wouldn't be the expert witness. You're there because your name and reputation are thought well of, and they [prosecution/defence] need your help.
I think of it like this: The lawyer/barrister/attorney's job is to sell their vision of the truth and get the jury to buy into their vision [skewed as that vision may or may not be]. Your job is to sell the facts. Either the two are one and the same, or they aren't. Sell the facts to the best of your ability, if you have easily understood examples [by easily understood, I mean by an 8 year old], all the better.
Key concepts, I would think, would be software systems that people use or exploit either to commit a crime or to cover one up:
Networking systems: Common protocols, packet tracing etc.
Firewall systems and common exploits.
Viruses and replication: Worms vs. Trojans etc.
Major Operating Systems: Basic concepts and common exploits.
Web Applications: How they're structured and how they can be exploited.
Common hacking concepts: DoS, OOB, SQL Injection, etc. (see the sketch after this list).
Email concepts: transmission, receipt, tracking, header information.
Data storage and recovery concepts and key software.
Surveillance Techniques: Packet analysis, key loggers etc.
I'm sure there are a few others, but none immediately spring to mind.
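Since SQL injection appears in the list above, here is a minimal sketch of the mechanism and its standard fix, using Python's sqlite3 (the table and the attacker input are invented); it's the kind of thing an expert may need to demonstrate in lay terms:

    # SQL injection in one picture: string-built queries vs. parameterized queries.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'a1'), ('bob', 'b2')")

    attacker_input = "nobody' OR '1'='1"   # classic injection payload

    # VULNERABLE: the payload becomes part of the SQL and matches every row.
    unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
    print(conn.execute(unsafe).fetchall())                    # leaks both rows

    # SAFE: the driver treats the payload as a literal string value.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (attacker_input,)).fetchall())   # []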
Definitely learn about email systems. I'd imagine email communications come into play fairly often in court cases these days. Learn how SMTP and POP3 work. Learn the basics of email servers and what ways they can be manipulated and how difficult it is to do.
I think you're deceiving yourself; what is a "computer software expert witness"? That's like saying that because you're an electrical engineer, you have the capacity to answer any engineering question, whether from chemical, mechanical, civil, or another specific area of engineering.
As I learn more about Computer Science, AI, and Neural Networks, I am continually amazed by the cool things a computer can do and learn. I've been fascinated by projects new and old, and I'm curious about the interesting projects/applications other SO users have run into.
The Numenta Platform for Intelligent Computing. They are implementing the type of neuron described in "On Intelligence" by Jeff Hawkins. For an idea of the significance, they are working on software neurons that can visually recognize objects in about 200 steps instead of the thousands and thousands necessary now.
Edit: Apparently version 1.6.1 of the SDK is available now. Exciting times for learning software!!
This isn't AI itself, but OpenCyc (and probably its commercial big brother, Cyc) could provide the "common sense" AI applications need to really understand the world in which they exist.
For example, Cyc could provide enough general knowledge that it could begin to "read" and reason about encyclopedic content such as Wikipedia, or surf the "Semantic Web" acting as an agent to develop some domain-specific knowledge base.
From Wikipedia:
Arthur L. Samuel (1901 – July 29, 1990) was a pioneer in the field of computer gaming and artificial intelligence. The Samuel Checkers-playing Program appears to be the world's first self-learning program... Samuel designed various mechanisms by which his program could become better. In what he called rote learning, the program remembered every position it had already seen, along with the terminal value of the reward function. This technique effectively extended the search depth at each of these positions. Samuel's later programs reevaluated the reward function based on input professional games. He also had it play thousands of games against itself as another way of learning. With all of this work, Samuel's program reached a respectable amateur status, and was the first to play any board game at this high a level.
Samuel: Some Studies in Machine Learning Using the Game of Checkers (21-page PDF). The Singularity is near! :)
One of my own favorites is Donald Michie's 1960 project MENACE: the Matchbox Educable Noughts and Crosses Engine. In this project Michie used a collection of matchboxes with colored beads that he taught to play Tic-Tac-Toe. This was to demonstrate that machines could, in some sense, learn from their previous successes and failures.
More information as well as a computer simulation of the experiment are here: http://www.adit.co.uk/html/menace_simulation.html
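A minimal sketch of the MENACE idea, reduced to a single decision point (real MENACE used one matchbox per reachable board position; the bead counts, reinforcement amounts, and win odds below are all invented for illustration):

    # MENACE in miniature: one "matchbox" holding colored beads, one color per move.
    # Moves are drawn with probability proportional to bead count; wins add beads
    # for the chosen move, losses remove them. Over time the box "learns" to
    # prefer the move that wins more often.
    import random

    beads = {"corner": 4, "edge": 4, "center": 4}   # initial beads (assumed)

    def draw(box):
        moves = list(box)
        weights = [box[m] for m in moves]
        return random.choices(moves, weights=weights)[0]

    def play(move):
        # Stand-in for a real game: pretend "center" wins 60% of the time and
        # the others 30%. Entirely invented odds, just to drive the learning.
        odds = {"center": 0.6, "corner": 0.3, "edge": 0.3}
        return random.random() < odds[move]

    for _ in range(2000):
        move = draw(beads)
        if play(move):
            beads[move] += 3                        # reinforce: add beads
        else:
            beads[move] = max(1, beads[move] - 1)   # punish: remove a bead

    print(beads)  # "center" ends up with by far the most beads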
http://alice.pandorabots.com/ - this bot is able to have a pretty intelligent conversation with you.
http://www.triumphpc.com/johnlennon/ - recreating the personality and thoughts of John Lennon; you can have a chat with him on this site.
http://AngelCog.org is quite interesting. The project is based on the idea that to make a true AI, you must do it in three stages:
1) Try to process logic in general, and be able to describe anything.
2) Logically process code, and process "Stories" about the real world.
3) Logically process its own code, and talk to people.
The project is based on the idea that once a program is logically processing its own code, it is already an AI. Of course, it also needs to be able to understand the "real world". That's the "other half".
As far as I'm aware, no one else has a project based on the assumption that to make a proper AI, the AI must understand the language in which it is written. So let's say an AI is written in C++. Well, then it must master C++ and be able to read, write, and alter C++ programs, especially itself!
It's still a "toy" right now, however, and is still in the "first stage" of development ("Try to process logic in general, and be able to describe anything"). But the developer is looking for help.