I would like to improve feature matching between two images by using a pre-trained semantic segmentation model. I am familiar with ORB and SIFT features and matching them with OpenCV in Python, but I am curious whether the matching can be further improved with a precise semantic segmentation model. My goal is to calculate the epipolar geometry between the two images. Does it make any sense to do that? Is there an algorithm that does this? I could not find anything on the internet.
You don't have to go all the way to hard decisions on semantic boundaries to enrich your correspondences with semantic information.
You can replace SIFT/ORB, which are low-level features, with semantic descriptors, such as DINO-ViT features. Check out this project page and see how these features can be used for establishing semantic correspondences between images.
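If you want something simpler to experiment with first, here is a minimal sketch of one obvious way to combine the two ideas: keep only the SIFT matches whose endpoints land on the same semantic class, then estimate the fundamental matrix from those. This is not a specific published algorithm; `seg1`/`seg2` are assumed to be per-pixel label maps from whatever pre-trained segmentation model you use.

```python
# Hedged sketch: filter SIFT matches by semantic class before estimating the
# fundamental matrix. img1/img2: uint8 grayscale images; seg1/seg2: integer
# label maps of the same size (assumed inputs from any segmentation model).
import cv2
import numpy as np

def semantic_filtered_fundamental(img1, img2, seg1, seg2):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Standard ratio-test matching
    matcher = cv2.BFMatcher()
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    # Keep only matches whose endpoints fall on the same semantic class
    pts1, pts2 = [], []
    for m in good:
        p1 = kp1[m.queryIdx].pt
        p2 = kp2[m.trainIdx].pt
        if seg1[int(p1[1]), int(p1[0])] == seg2[int(p2[1]), int(p2[0])]:
            pts1.append(p1)
            pts2.append(p2)

    pts1, pts2 = np.float32(pts1), np.float32(pts2)
    if len(pts1) < 8:
        return None, pts1, pts2, None  # not enough matches for RANSAC

    # Epipolar geometry from the semantically consistent matches
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    return F, pts1, pts2, inlier_mask
```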
I am new to programming and would like to create a chatbot. (I know a little arithmetic, statistics, and linear algebra, but have no knowledge yet of ML/DL/AI theory, and since I'm just starting, I haven't done any projects yet.) The final goal I have set myself is to be able to create a chatbot with artificial intelligence, but after some research I saw that it will take me quite a long time.
So I have set myself an intermediate goal: just to create a chatbot that can send and reply to messages automatically. To this end, the programming languages that have been recommended to me are Python, Ruby, PHP, Java... but, in view of my final objective of creating a chatbot with AI, I would like to know which programming language will be more useful and more appropriate for me.
[RE]: Given my situation, I haven't started a project yet (I'm looking for the right language so I can get started). Yes, I know I'm repeating myself, but that's why I can't present a community-specific problem. Besides, since I just learned that my question is a matter of opinion and does not respect the rules of the platform, I humbly ask the moderators to remove it.
Thanks!
Hey that’s an interesting project to do.
As you are more focused on the artificial intelligence side, I would stick with the biggest and most common ML language:
Python - it is currently the biggest machine learning language and lets you use the open-source TensorFlow library for your ML models.
I think what you will find interesting and challenging, once you get into more complex sentences, is natural language processing. Python has NLTK (the Natural Language Toolkit), which is a good place to start and learn from.
Once you have a basic Python console chat system working, you might want to show it off in a nicer way, so you could wrap it in a simple Python API and call it from a small JavaScript web-browser chat application. But since you're more interested in the first part, I'd suggest going with Python.
I'd start off by trying to make the bot respond to predefined strings and then go from there (see the sketch below). It's worth noting that there are a number of open-source GitHub projects with ML and natural language processing bots, so have a little look around for inspiration: https://github.com/topics/chatbot
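Just to make that first step concrete, a predefined-string bot can be as small as this (keywords and replies are made up for illustration; no ML involved yet):

```python
# Minimal sketch of a rule-based chatbot that replies to predefined strings.
# Purely illustrative -- the keywords and responses below are invented.

RESPONSES = {
    "hello": "Hi there! How can I help you?",
    "help": "You can ask me about the weather or just say hello.",
    "weather": "I don't know yet -- hooking up a real data source comes later.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    words = message.lower().split()  # later: swap in NLTK tokenization
    for keyword, answer in RESPONSES.items():
        if keyword in words:
            return answer
    return "Sorry, I don't understand that yet."

if __name__ == "__main__":
    while True:
        user = input("you> ")
        if user.strip().lower() == "quit":
            break
        print("bot>", reply(user))
```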
Also, FYI: if you're writing a report on this, doing detailed investigative work on what tooling and language to use is an important part of it. Gather information and sources about usage and so on, and then reason about why you made your choice.
Hope this points you in the correct direction and good luck 👍
We are trying to develop a workflow, using React, react-sketchapp, and Sketch, in which a designer working in Sketch can produce React components and the code (at least the CSS) is automatically generated and configurable. There doesn't seem to be a lot of documentation and/or support.
We understand the basics of going from a React component to Sketch, but has anyone figured out a workflow for the reverse? Any additional thoughts or documentation that could help?
We are thinking of dropping react-sketchapp and moving to a more effective workflow.
We have spent a lot of time on this problem, so we can share some of the experience we gained during product development.
First of all, react-sketchapp goes in the "React to Sketch" direction, not Sketch to React.
For the Sketch-to-React problem, the hardest things we found are:
1. The difference in tree structure between the Sketch file and the React code.
Designers focus on the visuals and don't care much about item structure, but engineers DO care about the structure of the DOM, because it is designed for dynamic layout and resizing.
So our way is to use some smart algorithm (or AI in the future) with a little human intervention. Sometimes it is really hard to decide, because there might be several "right" answers in terms of layout.
2. The description or settings for the resizing layout.
Sketch (the design tool) has no concept of, or hints about, layout (especially the relationships among children), so engineers sometimes have to discuss with the designer the exact behavior while a component is resizing.
So a way is needed to convert the structure into some kind of layout settings. After experimenting, our suggestion would be CSS flex and grid, because they provide great flexibility for positioning, are widely used and recognized by frontend engineers, and have high browser compatibility.
The two issues above are hard to automate 100%, but after several iterations we think it can be about 70% automated with some algorithms and a little human intervention. The generated code is then useful, with the right layout structure. After that, it can be improved into better code by applying some code optimization (shared styles, better naming, simplified CSS/code, etc.), which is the easier part to solve.
I've been working with this concept (Sketch-to-React translation) and haven't found an effective/configurable solution, and I don't think there will be one any time soon, at least not one that puts the best parts of Sketch to use alongside the best parts of React.
Sketch has an amazing symbol system that allows for design tokens (such as a global color palette or typography styles) to be re-used or instanced from a central location – a source of truth.
Unfortunate circumstance 1: These symbol sets do not translate into variable sets (CSS or SCSS, etc.) when exported into a React component – all the values are hard-coded and no variables are present. You may be able to create a very custom implementation to do this, but it will almost certainly break the second you reconfigure your build process or tech-stack choices.
Unfortunate circumstance 2: Sketch symbols or UI elements don't manage their own state; you simply swap them out for a different symbol/element. There's far too much involved to translate multiple Sketch elements into the logic of a React component that manages its own state, especially when you take into account the complexity of modern build pipelines and technology stacks.
How my team has worked around this:
Shared building blocks and standards that bridge the gap between design and development can REALLY help (an example is a set of CSS color variables that maps to a set of Sketch color symbols). Atomic-level pieces of a system can be translated between Sketch and CSS with relative ease – then all you have to do is put the puzzle pieces together. Combine this with a good layer of communication between designers and developers, and your team will be far ahead of the curve.
I am trying to build my understanding of OpenGL and see how the 'pros' do things. I am looking for examples of these composite objects (preferably in C) - I've learned through examples, so I think it would help others to see them as well :)
I'm very much a newbie at OpenGL, so this may be a stupid question - I'm just looking for objects I can mess around with to get more familiar with OpenGL. I have found that it's easier for me to pick things up by tweaking an example until it breaks, then fixing it :)
@Nicol Bolas - When I say composite objects, I mean objects that, when linked together, create something 'larger.' An example would be a car: it has a body and tires. Maybe I'm not using the correct term here?
You might be interested in this list of OpenGL based games and applications, particularly the Open Source games.
Also, Ogre 3D is a well-known Open Source graphics engine with an OpenGL renderer.
Feel free to answer the question in the title as generally as I posed it; I offer some more details and specifics below.
Currently I develop and maintain a somewhat legacy business app (ASP/SQL) that is highly customizable, allowing moderate to full customization of custom fields, forms, views, reports, actions, events, workflows, etc. This customization is necessary in the domain we develop for and has allowed us to build a niche.
I have been reading up on the inner-platform effect and ways of implementing high-level user-defined customization, and have concluded that we do suffer from many of the inner-platform effect's problems, because essentially we have created a high-level abstraction on top of SQL. The organization of custom fields is implemented in a way similar to the approach found here:
http://blog.springsource.com/arjen/archives/2008/01/24/storing-custom-fields-in-the-database/
We use something similar to the meta-database method described in that article. All customization is built around this approach, and in many ways we suffer from a database on top of a database.
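To make that concrete, here is a stripped-down sketch of the kind of schema I mean (table and column names are invented for illustration, using SQLite from Python for brevity; it is the pattern, not our actual schema):

```python
# Rough sketch of the "meta database" / entity-attribute-value style custom
# fields described above. All names here are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);

-- Custom fields are defined as data, not as real columns
CREATE TABLE custom_field (
    id INTEGER PRIMARY KEY,
    entity TEXT,      -- which table the field extends, e.g. 'customer'
    name TEXT,
    datatype TEXT
);

-- One row per (record, field) pair holds the actual value
CREATE TABLE custom_value (
    record_id INTEGER,
    field_id INTEGER REFERENCES custom_field(id),
    value TEXT
);
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO custom_field VALUES (1, 'customer', 'region', 'text')")
conn.execute("INSERT INTO custom_value VALUES (1, 1, 'EMEA')")

# Reading a record back means pivoting rows into "columns" at query time --
# this join-per-field cost is a big part of the inner-platform pain.
rows = conn.execute("""
    SELECT f.name, v.value
    FROM custom_value v JOIN custom_field f ON f.id = v.field_id
    WHERE f.entity = 'customer' AND v.record_id = 1
""").fetchall()
print(rows)  # [('region', 'EMEA')]
```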
The end result is something that looks fantastic on paper, yet the more features are added and the more custom coding is done for clients, the more of a mess everything becomes. The more I read, the more I realize this is somewhat of an anti-pattern. It also seems that the more I look, the more I find how little has been written on the topic. Anyway, I am trying to learn modern approaches to this problem and to find more discussion/articles on it. Are database systems such as CouchDB relevant to this type of application?
My question is clearly pretty general. It seems like there is a lot against this kind of application in favor of just "knowing and defining your domain better". Are there any good/better ways to implement this kind of application? I'm not looking for black and white answers, and any further readings on the subject would be fantastic. Thanks for any help.
My answer is: be conscious and clear about what is a plugin's job and what is a user setting. In that case, your platform and your settings are different things. Your application provides basic services and is unabashedly a platform. It may also provide an application built on that platform.
So in that case you focus on programmer interfaces instead of implementation possibilities.
The standard advice in CS is to create another level of abstraction, but I'm not sure that isn't exactly the problem here.
The only advice I can give is to push as much functionality as possible onto the database, given that it is the platform. SQL Server supports custom functions, fields, and stored (SQL) procedures.
Either that or try to pull repeated functionality into separate functions in ASP.
Behind the tables and tables of raw data, how does Wolfram Alpha work?
I imagine there are various artificial intelligence mechanisms driving the site but I can't fathom how anyone would put something like this together. Are there any explanations that would help a programmer understand how something like this is created? Does the knowledge base learn on its own or is it taught very specific details in a very organized manner? What kind of structure and language is used to store this type of data?
Obviously this is a huge question and can't fully be answered here but some of the general concepts would be nice to know so I can build off of them and do my own research.
"Does the knowledge base learn on its own or is it taught very specific details in a very organized manner?"
AI systems are usually something distinctly in between. The system will usually learn in a directed way, where the developers can apply a metric that measures the quality of the learning, and the system learns by attempting to maximise that metric. Where the expertise comes in is in developing efficient and effective representations of the data, so that it lends itself to this learning process and to the measurement of how well the learning is going.
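As a toy illustration of that idea (and nothing more; this says nothing about how Wolfram Alpha actually works), "learning by maximising a metric" can be as simple as proposing changes and keeping only those that improve a developer-defined score. All the data and the metric below are invented.

```python
# Toy "directed learning": propose changes, keep them only if the quality
# metric improves. Entirely illustrative; not related to Wolfram Alpha.
import random

def predict(params, x):
    return 1 if x > params["threshold"] else 0

def metric(params, examples):
    # Developer-defined quality measure: fraction of examples answered correctly
    return sum(1 for x, y in examples if predict(params, x) == y) / len(examples)

# Synthetic labelled data: the "true" boundary is at 0.6
examples = [(x, 1 if x > 0.6 else 0) for x in (random.random() for _ in range(200))]

params = {"threshold": 0.5}
best = metric(params, examples)

for _ in range(1000):  # directed search: maximise the metric
    candidate = {"threshold": params["threshold"] + random.uniform(-0.05, 0.05)}
    score = metric(candidate, examples)
    if score > best:   # keep only improvements
        params, best = candidate, score

print(params, best)
```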
Take a look at the API
This official blog post gives part of the explanation: the language Mathematica.
It looks like there is a large number of algorithms, from which those that might be relevant are selected by pattern matching.
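In code terms, that guess might look something like the following toy dispatcher (entirely speculative, and nothing to do with the real implementation): each query is tried against a list of patterns, and the first algorithm whose pattern matches handles it.

```python
# Toy "select an algorithm by pattern matching on the query" dispatcher.
# Patterns and handlers are invented for illustration only.
import re

HANDLERS = [
    (re.compile(r"^\s*(\d+)\s*\+\s*(\d+)\s*$"),
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (re.compile(r"^\s*integrate\s+(.+)$", re.I),
     lambda m: f"run symbolic integration on {m.group(1)}"),
    (re.compile(r"^population of (.+)$", re.I),
     lambda m: f"look up population data for {m.group(1)}"),
]

def dispatch(query):
    for pattern, handler in HANDLERS:
        match = pattern.match(query)
        if match:
            return handler(match)
    return "no matching algorithm"

print(dispatch("2 + 2"))                  # 4
print(dispatch("population of France"))   # look up population data for France
```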