In my career I've come across two broad types of theory: physical theories and educational/management theories:
Physical theories are either correct (under appropriate conditions) or incorrect, as judged by the physical world.
Educational/management theories have the appearance of being like physical theories, but they lack rigorous testing. At best they give new ways of thinking about problems. Multiple theories are useful because one of them may speak to you in the right way.
As a hobbyist student of software engineering, I see what appear to be a lot of theories of software engineering (such as agile programming, test-driven design, patterns, and extreme programming). Should I consider these theories to be physical-like or educational/management-like?
Or have I misunderstood software engineering and find myself in the position of being "not even wrong"?
Software engineering is ultimately about psychology, how humans manage complexity. So software engineering principles are far more like education and management theories than physical principles.
Some software engineering has solid math behind it: O(n log n) sorts are faster than O(n^2) sorts, etc. But mostly software engineering is about how humans think about software. How to organize things so that maintainers don't go crazy, anticipating what is likely to change and what is not, preventing and detecting human errors, etc. It's a branch of psychology or sociology.
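Where the math is solid, you can even check it empirically. Here is a minimal sketch that counts comparisons rather than wall-clock time, so the result is machine-independent (the algorithms are the standard textbook ones; the input size is arbitrary):

```python
import random

def bubble_sort_comparisons(a):
    """O(n^2): count the comparisons made by a simple bubble sort."""
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

def merge_sort_comparisons(a):
    """O(n log n): return (sorted list, comparison count) for top-down merge sort."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, count = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count

data = [random.random() for _ in range(2000)]
print("bubble sort:", bubble_sort_comparisons(data))        # ~n^2/2, about 2,000,000
print("merge sort:", merge_sort_comparisons(data)[1])       # ~n*log2(n), about 22,000
```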
I think the appropriate split is between the "harder" sciences (where there can be proofs) and the softer topics with qualitative answers and few proofs, if any.
Software to me is mostly about language and communication, a topic that is mostly qualitative and subjective. Every now and then we touch on algorithms and other "hard" areas, where proofs and rigorous formalisms exist. So, yes, both please.
Not even wrong.
All the software engineering "theories" seem to be nothing but advice on particular things to try to see if they make you and your team more productive. Even if one could set them up to be falsifiable like scientific theories, there would be very little point to it. That is not to say that it is not worthwhile to learn about them -- au contraire, you should become familiar with as many of them as possible and try to figure out in what kinds of teams and environment they may work better. But be careful: avoid dogma and thinking there are silver bullets.
I wouldn't call agile programming, test-driven design, patterns, extreme programming, etc. "theories"; they're methodologies, or work styles. They don't make any assertions.
Generally, the field of Informatics is divided into four areas (I need to find a link to the source; SWEBOK, perhaps), which are distinct although related and interconnected:
Computer Science
Software Engineering
Computer Engineering
Information Systems
There is a good analysis of engineering vs. science in Steve McConnell's "Professional Software Development". Check out his chapter "Software Engineering, Not Computer Science".
Software development is more about engineering, finding practical solutions to practical problems, than anything else. It is true that software engineering relies on computer science, mathematics, complexity theory, systematics, psychology, and other disciplines, but it cannot be equated with any of them, nor is it a mere blend of them.
Besides theories, there are also frameworks, models, and rules of thumb. Ideas, sure, but built on a less rigorous foundation; these loosely belong to your education/management category.
Computer Science has some strong foundational theories (physical ones by your definition), but these mostly consist of tying together the smaller elements.
Software Engineering, on the other hand, is a relatively new discipline that involves utilizing computers and occasionally Computer Science to build software systems. Most of the practice in that arena is based on non-rigorous experimental and anecdotal evidence. Since the jury is still out on even the simplest of issues, most of what passes for practice could best be described as pure guesswork and irrational preference. It's one of those disciplines where you really do have to know a lot to realize how much is built on a house of very unstable cards.
Paul.
Being intangible, programming is a very difficult activity to relate to another human being, even other programmers. Software engineering tries to add structure where there is none, but such structure is not rooted in the inevitability of reality. So all these approaches become like religions in how groups of people behave when trying to appease their technical gods (or demons).
All these theories and best practices still haven't brought us to the point where we can produce software systems reliably and predictably. The newest of these surveys is dated 2001; Jeff's column from 2006 still laments high failure rates.
It'd be interesting to see if anybody's working on an updated survey.
Avionics and the software running my car don't seem to fail at anything close to the rates quoted for enterprise software. Why don't enterprise developers follow their practices more closely? Maybe we should all be writing Ada....[just kidding]
They're like recipes: they're guidelines, whose success depends:
Partly, on the quality of the recipe
Partly, on the quality of the ingredients
Partly, on the skill of (and time available to) the practitioners
For me, it's my own theory, with many of the others used as a base. I don't know anyone who uses a single specific theory. And that's not a cop-out answer.
Just as there are different languages, theories/practices/methodologies are to be used in distinct situations. The structure, rules, and definitions are all the ways in which people understand how things are to be accomplished, but what is to be accomplished is subjective.
Adapt: apply agile, extreme, or other methods at the discretion of the client, the project, the programmer, the time available, and especially whatever makes you successful and happy. Be a team player and adjust/adapt to what your team is doing for the greater good; just keep in mind to have something that you have defined in your own mind, so that it's not just chaos.
[SOAPBOX]
I started programming on the Atari 400 with a converted flat keyboard and a 64K upgrade. When I started college, VB 1.0 had just come out, and I saw my Economics teacher use it to build a teaching tool to help people learn more about economics using graphs and visual inputs. That was cool! And I knew I could do that.
This same Economics teacher, who later became an IT teacher too (he was good), asked if I would teach a class on debugging. He said, "I haven't met someone who understands the concepts and has a natural ability to debug as fast as you do; would you teach us what you know and how you do it?" This was a boost to my ego, of course, but more importantly it was a chance to teach, mentor, and help others.
Every one of those instances has fueled my desire to help other people. For me, I want a computer to do exactly what I want, and to help other people in their business and home lives increase their quality of life, learn more, and get more done.
Someone said to me one time, "You're only as good as your tools". Learn, practice, and grow.
If you've defined something that works, has order, and stretches you and the boundaries, you're not wrong.
Is there a thing like "software engineering"?
Or is software development really "engineering"?
Facts:
Our industry is very young relative to many other engineering disciplines.
We still do not have "solid" practices and "theories".
So, to be honest, from the perspective of other, more mature engineering disciplines, it is hard to call what we do "engineering".
We have a very bad reputation for failure [our failure rates would not be acceptable in many other engineering branches].
Reality or Fantasy? Pick one :-)
Some people say that we do not have "solid" practices and "theories" because we are a young "engineering" branch, and that in time we will. These people say we need to work more on "theory" and foundations.
Other people say that software development is an "experimental social activity" because of the nature of our problem domain. We will have practices, theories, methodologies, and processes, but they will always be second-order effects: the unique people, their feelings and qualities, and their interactions with each other are more influential. These people see software development as a Complex Adaptive System.
And there is also another reality:
80% of software development activities really do not require a very brilliant mind; any "average" person can do them.
But the remaining 20% is a very hard and multidisciplinary task.
And there is yet another, newer perspective: my own :-)
This view says that software development is not a branch of "engineering"; it is a branch of the natural and social sciences. So we need software anthropology, and software anthropologists.
Theory: I think a theory is anything that describes "how" a natural system works and, in order to prove it, offers logical deductions based on previous knowledge, substantiated by logical inductions made through experiments.
The whole body of these theories and experiments is called Science.
Software: Software is a man-made, i.e. engineered, system. Engineering applies Science in order to create new systems. In that regard, pure Software Engineering applies the science of discrete mathematical systems.
But Commercial Software Engineering has a different motivation: Economics.
In that regard, it has to take into account all the factors that affect Economics, the chief of them being People. So Psychology plays a huge part.
But since Psychology itself is just a theory of "how" the human mind works, based on pattern recognition rather than logical deductions from human biology, it has many flaws, such as mistaking correlation for causation.
So, yeah, I think from the above you can better understand what Commercial Software Engineering is in total.
I'm a bit confused about artificial intelligence.
I understand it as the capability of a machine to learn new things, or do different things without actually executing code (already written by someone).
On SO I see many threads about AI in games, but IMO that is not AI, because if it were, then every piece of software, even a print command, could be called AI. In games there is just code being executed. I would call it pseudo-AI.
Am I wrong? Should this also be considered AI?
Wikipedia says this:
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" [1], where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
What you are considering is more specifically referred to as Machine Learning, which is indeed a subbranch of AI. As you can see from the second sentence above, however, the "AI" considered in games also fits perfectly well into this definition.
Of course, the actual line between what is AI and what is not is quite blurry. This is partly because everyone and his mother believes they know what "AI" means.
I suggest you grab yourself a more scientific book (say, the classic Russell & Norvig) to get a more thorough grasp of the different fields that sit under the huge roof of what we simply refer to as "AI".
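To make the textbook definition above concrete, here is a minimal sketch of the perceive-then-act agent loop in Python. The thermostat environment and every name in it are made up for illustration; real agents perceive far richer environments, but the loop is the same shape:

```python
def thermostat_agent(percept):
    """Simple reflex agent: the percept is the current temperature in Celsius."""
    if percept < 19:
        return 'heat_on'
    elif percept > 22:
        return 'heat_off'
    return 'do_nothing'

# Tiny simulated environment: the temperature responds to the agent's actions.
temperature = 17.0
for step in range(5):
    action = thermostat_agent(temperature)   # perceive, then choose an action
    if action == 'heat_on':
        temperature += 1.5                   # the environment reacts
    elif action == 'heat_off':
        temperature -= 0.5
    print(step, round(temperature, 1), action)
```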
"Minsky and McCarthy, both considered founders of AI, say that artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human had to apply intelligence to accomplish the task."
A more modern definition is to turn this on its head:
Artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human did not need to apply intelligence to accomplish the task.
Intelligence is the ability to do the things that don't require reasoning. Things like understanding and generating language, sequencing your leg muscles as you walk across the floor, or enjoying a symphony. You don't stop to reason for any of those things: you understand, intuitively, how to interpret things in your visual field, language, and all other sensory input, and you can do the right thing without reasoning. You can easily prepare your entire breakfast without any reasoning. :-)
Doing things that "require thought" or reasoning, like playing chess or solving integrals are things that computers can already do.
This misunderstanding about what intelligence really is has cost us 60 years and a million man-years of banging our head against the wall.
Deep learning is currently the most popular expression of an alternative path to a "better kind of AI". Artificial Intuition is a special branch of Deep Learning tailored to understanding text.
The easiest way to know whether you are dealing with classical (futile) or modern AI is whether the system requires you to supply any models of the world (MOTW). Any MOTW means the AI is limited to operate in the domain specified by the MOTW and is therefore not a general intelligence. Also, anything with a MOTW is typically not designed to extend that model; this is a very difficult task.
Better to start from a Model of the Mind (MOTM) or a Model of Learning. These can be derived either from neuroscience (difficult) or from epistemology (much easier). A well done MOTM can then learn anything it needs to know to solve problems in any domain.
The main problem for most is to find what's called "a domain-independent method for determining saliency". In other words, all intelligences, natural or artificial, have to spend most of their time answering the question "what matters".
Google my name and "AI" for more.
Frank and Kirt sum up the academic field of AI pretty well. Any difficulty there is defining AI reflects the more general problem of defining real intelligence. If AI has proved anything, it's that we have precious little idea what intelligence is, how amazing organisms are at solving problems, and how difficult it is to get machines to achieve the same results.
As for the use of the term AI in the video games industry, your confusion is justified. The prospect of intelligent characters in games is so compelling that the term long ago took on a life of its own as marketing jargon. However, AI is really just a poorly chosen name for the solving of problems that computers find hard and people find easy. And in that sense, there is plenty of genuine AI work going on in the games industry.
Take a look at AIGameDev.com for a taste of what is currently considered noteworthy in AI game development.
The most important aspect of AI, I believe, is "curiosity". Intelligence comes from the very fact that it is a result of curiosity.
There is no precise definition of AI, because intelligence itself is relative and hard to define. This is because many fields, ancient and modern, such as philosophy and neuroscience, serve as the foundations of AI. It depends on what your AI is expected to do.
Artificial Intelligence is the attempt to create intelligence from a computer program.
Regardless of whether it's a toy program or neuroscience, as long as a program is able to mimic human problem-solving skills, or even go beyond them, it is called Artificial Intelligence.
Of course, computer scientists' expectations of how capably a program (or machine) should solve problems increase over time. Tic-tac-toe programs were once considered intelligent, until chess programs were invented. Now we are attempting to mimic the human brain through neural networks.
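For reference, game-playing programs like the tic-tac-toe example above are classically built on minimax search. A minimal sketch follows; it is an illustration, not any particular historical program, and it explores the full game tree (a few hundred thousand positions), which takes Python a few seconds:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

print(minimax([' '] * 9, 'X'))  # (0, 0): perfect play from the empty board is a draw
```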
AI, for laypeople, is nowadays applied in most computer games. It's also used in many machines, like airplane autopilots and NASA's Mars explorer Curiosity (2012), which is able to detect terrain obstacles and move around them.
Very tricky stuff, AI. If you design a mind that replies to all the right questions with all the right answers, is it AI, or just a talking encyclopedia? What if you can teach the AI simply by talking to it; do you then consider it an AI with a mind, or, again, just a program? Perhaps the question, and the answer, is whether someone someday makes a machine that looks human, acts human, and thinks it's human, and whether others then feel the same, that it's human, when they don't know it's not. And then, what if it passes that test? You see, it's not really about whether the machine is conscious or has a mind, as those questions will never be truly answered. It's all about whether it seems conscious, acts conscious, and acts like it has a mind, given that's as close as humanity will ever get to understanding that riddle. If a machine acts like it cares and does helpful things, that's all that really matters, not the rest of the unseen picture. We just have to get this far in the first place. By the way, check out Webo, a teachable AI.
I'm starting to study machine learning and Bayesian inference applied to computer vision and affective computing.
If I understand right, there is a big discussion between
classical AI, ontology, and semantic web researchers
and machine learning and Bayesian guys
I think this is usually referred to as strong AI vs. weak AI, which relates also to philosophical issues like functionalist psychology (the brain as a black box) and cognitive psychology (theory of mind, mirror neurons), but that is not the point in a programming forum like this.
I'd like to understand the differences between the two points of view. Ideally, answers will reference examples and academic papers where one approach gets good results and the other fails. I am also interested in the historical trends: why approaches fell out of favour and newer approaches began to rise. For example, I know that Bayesian inference is computationally intractable in general (NP-hard), and that's why, for a long time, probabilistic models were not favoured in the information technology world. However, they began to rise in econometrics.
I think you have got several ideas mixed up together. It's true that there is a distinction that gets drawn between rule-based and probabilistic approaches to 'AI' tasks, however it has nothing to do with strong or weak AI, very little to do with psychology and it's not nearly as clear cut as being a battle between two opposing sides. Also, I think saying Bayesian inference was not used in computer science because inference is NP complete in general is a bit misleading. That result often doesn't matter that much in practice and most machine learning algorithms don't do real Bayesian inference anyway.
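As a concrete aside on what "Bayesian inference" means here: for a small model, exact inference is just arithmetic with Bayes' rule, as in this minimal sketch (the diagnostic-test numbers are made up for illustration). The intractability only bites as the number of interdependent variables grows:

```python
# Bayes' rule on a made-up diagnostic test: P(disease | positive test).
p_disease = 0.01          # prior probability of the disease
p_pos_given_d = 0.95      # sensitivity: P(positive | disease)
p_pos_given_not_d = 0.05  # false positive rate: P(positive | no disease)

# Total probability of a positive test, then the posterior.
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
posterior = p_pos_given_d * p_disease / p_pos
print(round(posterior, 3))  # 0.161: even after a positive test, disease is unlikely
```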
Having said all that, the history of Natural Language Processing went from rule-based systems in the 80s and early 90s to machine learning systems up to the present day. Look at the history of the MUC conferences to see the early approaches to information extraction task. Compare that with the current state-of-the-art in named entity recognition and parsing (the ACL wiki is a good source for this) which are all based on machine learning methods.
As far as specific references, I doubt you'll find anyone writing an academic paper that says 'statistical systems are better than rule-based systems' because it's often very hard to make a definite statement like that. A quick Google for 'statistical vs. rule based' yields papers like this which looks at machine translation and recommends using both approaches, according to their strengths and weaknesses. I think you'll find that this is pretty typical of academic papers. The only thing I've read that really makes a stand on the issue is 'The Unreasonable Effectiveness of Data' which is a good read.
As for the "rule-based" vs. " probabilistic" thing you can go for the classic book by Judea Pearl - "Probabilistic Reasoning in Intelligent Systems. Pearl writes very biased towards what he calls "intensional systems" which is basically the counter-part to rule-based stuff. I think this book is what set off the whole probabilistic thing in AI (you can also argue the time was due, but then it was THE book of that time).
I think machine-learning is a different story (though it's nearer to probabilistic AI than to logics).
I am a Computer Science student. I want to do an AI project for my 4th year with two other students. (It's a 5-year degree at my university, so I can pursue the same project for two consecutive years if I want to.) Our knowledge of AI is very basic at the moment, since we'll be specializing in it over these coming two years, so a very advanced idea will probably be hard to accomplish. We're not expected to break new, untouched ground either, so the more resources the better.
I'm interested in ideas that can benefit people and not just applying algorithms and techniques. I want to do a masters after graduation, but I'm not sure in what field yet.
I'd love to do a medical application or a project that is of some use to the handicapped.
Some projects already pursued at the university included one to recognize breast cancer and one to teach sign language to the deaf.
I'm wondering:
1) what other ideas we can work on in those fields?
2) how much will my choice of graduation project affect my application for a masters degree?
3) Is a stock price prediction expert system too advanced for us?
Thanks a lot.
1) what other ideas we can work on in those fields?
It's amazing to me how little imagination computer science students seem to have. Stackoverflow.com is rife with questions about first projects from beginners and students.
I think that using statistics and data in novel ways, like Peter Norvig's spell checker, would be most interesting and fruitful.
Dr. Peter Norvig is a well-known computer scientist and AI guru. He's the Director of Research at Google now. Perhaps you can mine a choice out of his writings.
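For a flavor of what that looks like, here is a condensed sketch along the lines of Norvig's essay "How to Write a Spelling Corrector". It assumes a plain-text corpus in a file named big.txt (as in the essay), and it is a paraphrase from memory, not his exact code:

```python
import re
from collections import Counter

# Word frequencies from a corpus file; big.txt is the file used in Norvig's essay.
WORDS = Counter(re.findall(r'[a-z]+', open('big.txt').read().lower()))

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correction(word):
    """Most frequent known word within edit distance 0 or 1 of word."""
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)

print(correction('speling'))  # -> 'spelling', given a suitable corpus in big.txt
```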
2) how much will my choice of graduation project affect my application for a masters degree?
Depends on too many other factors that you don't mention, like your past record as a student, etc. Probably a minor factor, in my opinion. Nobody is admitted to a masters program on the basis of a graduation project. Neither your undergrad project nor a masters thesis is a doctoral dissertation. Don't get them confused.
3) Is a stock price prediction expert system too advanced for us?
I think stock price prediction is too advanced for anybody. After years of applying Fourier analysis, statistical models, Monte Carlo simulations, etc. if it were possible to do it would have been done.
2) how much will my choice of graduation project affect my application for a masters degree?
If you are applying for a PhD, the faculty in the prospective department tend to favor students who are interested in the research they are doing, or who have demonstrated the ability to do their own research. For a Masters these are not much of an issue, but they can make a little difference.
3) Is a stock price prediction expert system too advanced for us?
Well, if you built one that worked, you would start using it to make money; others would see what you were doing and imitate you, so pretty soon your arbitrage opportunity would be gone.
Still, these types of systems are often built by students in machine learning classes, mostly because there is a lot of freely available, well-formatted data on stock prices, so it's easy to get started writing the program. It is a good way to gain insight into machine learning algorithms.
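In that class-exercise spirit, a toy sketch might look like the following: fit a linear model predicting tomorrow's close from the last five closes. The data here is a simulated random walk standing in for a real price series; on such data (and, the efficient-market argument goes, on real data too) the model does no better than "predict the last price", which is rather the point:

```python
import numpy as np

# Toy illustration only. A random walk stands in for real closing prices.
rng = np.random.default_rng(0)
closes = np.cumsum(rng.normal(0, 1, 500)) + 100

# Features: the previous `window` closes. Target: the next close.
window = 5
X = np.array([closes[i:i + window] for i in range(len(closes) - window)])
y = closes[window:]

# Fit on the first 80%, evaluate on the rest.
split = int(0.8 * len(X))
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE: {rmse:.2f}")  # on a random walk, nothing beats 'predict last price'
```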
1) What other ideas we can work on in those fields?
Find some problem that you are passionate about, that you will learn something from by tackling, and that is within the scope of your time, effort, and ability. Projects like this are relevant not only for grad school but also when applying for entry-level jobs (even if those are still a few years off after a masters degree). It helps to pick something you can put on a resume that shows your level of accomplishment and your ability to complete a task.
2) How much will my choice of graduation project affect my application for a masters degree?
The topic choice probably won't matter significantly, except perhaps for top-tier programs or if you have notable weaknesses in other admissions criteria. If the latter is true, then a good project may help, but even that is uncertain. Masters program admissions, I think, are generally handled by administrative staff, so they are probably more interested in whether or not you did a project than in what the topic was.
3) Is a stocks price prediction expert system too advanced for us?
Yes, a stock price prediction system is far too difficult if you want a system that actually can work reasonably well over anything other than a small training data set.
The market is neither a natural system, a machine, nor even a system of rational collective behavior. Its pricing mechanism is in general irrational: investors/traders may make transactions at prices that are reasonable for them relative to their own decision criteria, but the market as a whole is generally not rational. The market is more an aggregation of behavior rather than collective behavior.
The above alone would make for an intensively difficult problem to solve with AI methods, but beyond that there are issues of problem scale, the amount of training data which is needed, etc.
There are of course a large number of Wall Street trading firms using quantitative methods for high-frequency trading, etc. They are effective, however, because they are focused on narrow problems (price trends over the next few seconds-to-minutes in highly-liquid stocks, S&P index futures, etc.), they put a lot of work into their models and generally are constantly rebuilding the latter on a daily/weekly basis, and they understand the market's nature, i.e., it's largely irrational as a whole and is a competitive, shifting landscape of exploiting the pricing inefficiencies inherent to large money flows.
I would only recommend this problem domain if you have an intense personal interest in financial markets and have already spent a lot of time studying them, are prepared to fail, and are interested in learning a lot. Trying to work on this problem is certainly a good learning opportunity, but it will be hard to achieve any real success except for small problems unless you have many years to devote.
1) what other ideas we can work on in those fields?
Dr. Russell Greiner has a nice list of possible student projects in machine learning, several of which are related to medicine.
2) how much will my choice of graduation project affect my application for a masters degree?
It probably won't matter very much. However, choosing a ridiculously easy project probably won't help. I'm sure that you'll be vetting whatever you choose with your prof, so don't worry about that so much. Find a topic you're passionate about first and foremost.
3) Is a stock price prediction expert system too advanced for us?
Yes. Don't bother with that nonsense. The game of Go will be solved before anyone figures out the stock market.
1) what other ideas we can work on in those fields?
Are there any faculty members at your university that work in the field of bioinformatics? If so, talk to them and see if they give you a suitable project idea that gets you excited. If you decide to take this path, try to enroll in an Intro to Bioinformatics course as it will help you get familiar with the field and generally make things easier.
As a researcher, I am curious to hear what people think of Multi-Agent Systems, assuming of course you have come across the idea. Do you believe there is something more to them than just hype and another buzzword? Can you see any potential uses in business or everyday computing? Or do you think that we can already achieve everything MAS has to offer, but with simpler, more elegant solutions?
I am a research professor who has published many articles in the Autonomous Agents and Multiagent Systems Conference (AAMAS): the main vehicle for multiagent research.
MAS is a term used by researchers (coined around 1995, for the first International Conference on Multiagent Systems (ICMAS), which brought together the Distributed Artificial Intelligence (DAI) and the Autonomous Agents research communities under one tent: the MAS tent) that refers to algorithms and methods for organizing teams of autonomous agents. Researchers in MAS have developed algorithms for Robot soccer (see Robocup), coordinating autonomous robot rovers (as in Mars rovers), distributed allocation of resources (who does which task), and many other domains.
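To make "distributed allocation of resources" concrete, here is a minimal sketch of a single-round, contract-net-style auction, one of the staple MAS coordination patterns: agents bid their private cost for each task, and each task is awarded to the cheapest bidder. The agents, tasks, and costs are all made up for illustration:

```python
# Each agent knows only its own cost for each task (its private valuation).
agents = {
    'rover_a': {'drill': 3, 'photo': 1, 'sample': 4},
    'rover_b': {'drill': 2, 'photo': 5, 'sample': 2},
    'rover_c': {'drill': 6, 'photo': 2, 'sample': 3},
}

# One auction round per task: collect bids, award to the lowest bidder.
for task in ['drill', 'photo', 'sample']:
    bids = {name: costs[task] for name, costs in agents.items()}
    winner = min(bids, key=bids.get)
    print(f"{task}: awarded to {winner} (bid {bids[winner]})")
```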
I don't see that there is any "hype" as you describe. You can read all the papers in past conferences, and each one clearly states what the author tried to accomplish, how he tried it, and what the results were. I do not know of anyone making silly claims about the power of these techniques: they are all just algorithms (isn't everything?). No magic here.
The question
do you think that we can already achieve everything MAS has to offer, but with simpler, more elegant solutions?
is incorrect in that, if you can solve a MAS problem with a simpler and more elegant solution, your solution is now a MAS solution!
MAS is a problem domain, along with the solutions we found so far. If you find a better solution then, awesome, publish it and join the MAS community.
As an aside, I see this confusion often. Journeymen programmers don't realize that research communities are (usually) defined by the problem they work on, not a solution approach.
Compared to many other fields of Artificial Intelligence and Technology, multi-agent systems aren't hyped enough!
I keep finding people who are active in the fields of technology, programming, and "artificial intelligence" (quoted, since it is now hype and has effectively lost all meaning) yet know nothing about multi-agent systems.
I learned about multi-agent systems in 2008 through NetLogo, and it changed my perspective on problem solving using computational technology. I realized at the time that these types of programs would require increasing computing power. More recently I have learned all the hype-driven stuff (data science, ML, DNNs, RL, etc.). I think all this hype will integrate well with MAS in ways that have yet to be fully understood. Many people are introduced to this way of thinking through MMO gaming, which has been a huge hit, so there may be a leap yet to come.
As a computer software expert witness, I am required to analyze a huge range of different software technologies. During my deposition or trial testimony, the opposing expert may direct questions targeted at exposing or revealing my weaknesses. There is no time for research or education.
Given that I can't be an expert in every technology, what are the most versatile and transferable skills or technologies I should learn?
I will start with the obvious:
Databases are omnipresent (but which are the best archetypes?)
C is often involved due to the prevalence of older Windows and DOS based systems
What should be added to this list?
I may be misreading your question, but I suspect that if you are being called upon as an expert witness, you already have the expertise they are seeking. I suppose that learning more technical aspects of any technology would make you more likely to be called as an expert witness, but ultimately I would say the most valuable skill is truthfulness. If you don't know, say so. Any unknown questions can then become the "to be studied" list for later review.
just my 2 cents ...
It would be silly to call you as an expert witness if you cannot be an expert in the line of questioning.
Well, the big thing about being a witness is to listen to the counsel for which you are testifying. In the computer world, your credibility is not easily impugned. If they were to try, it would be by calling into question your formal education or training as insufficient for an expert. They won't be asking you to explain what a Turing machine is, or how to write a sorting algorithm in LISP, unless it is directly relevant to the matter at hand. They won't be playing "Gotcha!" with difficult technical questions, as it won't resonate with the judge or jury. How many jury members can you picture saying, "I can't BELIEVE that 'expert' doesn't understand database normalization! What a fraud!"? If the jury doesn't understand the question, they won't understand the answer. Any 1st-year law student will tell you all about this problem (it comes up in all kinds of expert testimony situations).
No, your credibility will be questioned in laymen's terms. If you are being asked to testify, it's because you have the answers that are relevant. Stick to those and don't do any tricks (as your counsel will tell you), and you'll be fine. If your information is correct, and your degree/experience is solid, you may not even be cross-examined (they will just find their own expert to say the opposite of what you said).
Computer software expert witnesses also need to have a good understanding of networking technology and be able to explain it to a jury or judge. Because a great deal of software is client/server based, being able to explain how firewalls, IP addresses, HTTP, and internet routers work, and why you can tell that certain pieces of software were definitely used at certain times and locations, is important.
Being familiar with server operating systems and the log files they generate is also helpful.
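As a small illustration of the kind of log-file evidence involved, here is a sketch that pulls the client IP, timestamp, and request out of one line in the common Apache/nginx "combined" log format. The sample line is fabricated (RFC 5737 example address):

```python
import re

# One fabricated line in Apache/nginx "combined" log format.
line = ('203.0.113.7 - - [01/Jan/2024:10:00:00 +0000] '
        '"GET /login HTTP/1.1" 200 512 "-" "Mozilla/5.0"')

pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)
m = pattern.match(line)
if m:
    # Who requested what, when, and with what result.
    print(m.group('ip'), m.group('time'), m.group('method'),
          m.group('path'), m.group('status'))
```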
I would say forget learning new technology outside understanding industry concepts and how they're really applied in the real world. The key thing you need to be able to do as an expert witness is explain these concepts in terms that can be easily understood by the layman. You already know this stuff or you wouldn't be the expert witness. You're there because your name and reputation are thought well of and they [prosecution/defence] need your help.
I think of it like this: The lawyer/barrister/attorney's job is to sell their vision of the truth and get the jury to buy into their vision [skewed as that vision may or may not be]. Your job is to sell the facts. Either the two are one and the same, or they aren't. Sell the facts to the best of your ability, if you have easily understood examples [by easily understood, I mean by an 8 year old], all the better.
Key concepts I would think would be software systems that people will use/exploit to either commit or to cover up a crime:
Networking systems: Common protocols, packet tracing etc.
Firewall systems and common exploits.
Viruses and replication: Worms vs. Trojans etc.
Major Operating Systems: Basic concepts and common exploits.
Web Applications: How they're structured and how they can be exploited.
Common hacking concepts: DoS, OOB, SQL injection, etc. (see the sketch after this list)
Email concepts: transmission, receipt, tracking, header information.
Data storage and recovery concepts and key software.
Surveillance Techniques: Packet analysis, key loggers etc.
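As promised above, here is a minimal, self-contained illustration of the SQL injection concept, using an in-memory SQLite database. The table and input are made up; the point is the contrast between string concatenation and a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# VULNERABLE: string concatenation lets the input rewrite the query itself.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns rows it should not

# SAFE: a parameterized query treats the input as data, never as SQL.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # [] -- no such user
```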
I'm sure there are a few others, but no others immediately spring to mind
Definitely learn about email systems. I'd imagine email communications come into play fairly often in court cases these days. Learn how SMTP and POP3 work. Learn the basics of email servers and what ways they can be manipulated and how difficult it is to do.
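To illustrate one piece of that: each relay on an email's path prepends a Received header, so a message carries a trace of where it has been. Here is a sketch using Python's standard email module; the message itself is fabricated, with RFC 5737 example addresses:

```python
from email import message_from_string

# Fabricated sample message showing how relays stack up Received headers.
raw = """\
Received: from mail.example.org (mail.example.org [203.0.113.7])
    by mx.recipient.com; Mon, 1 Jan 2024 10:00:00 +0000
Received: from sender-pc (unknown [198.51.100.9])
    by mail.example.org; Mon, 1 Jan 2024 09:59:58 +0000
From: alice@example.org
To: bob@recipient.com
Subject: hello

body text
"""

msg = message_from_string(raw)
# Each relay prepends its own Received header, so reading top to bottom walks
# the delivery path backwards, from the recipient's server toward the origin.
for hop in msg.get_all('Received'):
    print(' '.join(hop.split(';')[0].split()))
```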
I think you're deceiving yourself. What is a "computer software expert witness"? That's like saying that because you're an electrical engineer, you have the capacity to answer any engineering question, whether from the chemical, mechanical, civil, or any other specific area of engineering.