After consolidating 3 different Alfresco applications, OOTB search is not working - Solr

I have consolidated 3 different Alfresco applications. Initially we had 3 separate applications in Alfresco, but now those 3 applications are consolidated: the repository-layer code of all of them runs on a single instance, while there are still 3 different Share instances.
However, the Alfresco OOTB search functionality is not working properly.
I have also found the root cause: all 3 applications have their own search.get.js file, and each of these files contains some custom code.
So can anyone please suggest a way to use all 3 search.get.js files to achieve the search functionality?
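One possible direction, purely as an illustration and not something stated in the question: replace the three JavaScript controllers with a single consolidated controller that dispatches per application, for example a Java-backed web script. The class below is a hypothetical sketch; the app request parameter and the three branch methods (which would hold the custom code currently living in each search.get.js) are assumptions.

```java
// Hypothetical sketch: one consolidated controller for the search web script.
// The "app" request parameter and the three branch methods are assumptions;
// each branch would carry the custom logic from the corresponding search.get.js.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class ConsolidatedSearchWebScript extends DeclarativeWebScript {

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        Map<String, Object> model = new HashMap<>();
        String app = req.getParameter("app"); // which Share instance is calling (assumed parameter)

        if ("app1".equals(app)) {
            model.put("results", searchForApp1(req));
        } else if ("app2".equals(app)) {
            model.put("results", searchForApp2(req));
        } else {
            model.put("results", searchForApp3(req));
        }
        return model;
    }

    private Object searchForApp1(WebScriptRequest req) {
        // TODO: port the custom logic from app1's search.get.js
        return Collections.emptyList();
    }

    private Object searchForApp2(WebScriptRequest req) {
        // TODO: port the custom logic from app2's search.get.js
        return Collections.emptyList();
    }

    private Object searchForApp3(WebScriptRequest req) {
        // TODO: port the custom logic from app3's search.get.js
        return Collections.emptyList();
    }
}
```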

Related

Document/action/command permission/access control, RBAC, ABAC

I'm developing a business web application for procurement. I use ASP.NET MVC Core 2.0, EF Core 2.0 and MSSQL Server. I've been searching the Internet for an answer to my question for a couple of months but couldn't find one.
Let's say I have a document and I want to control permissions for different actions on the document (read, edit, delete, edit doc number, edit different attributes of it, etc.). So I developed an IPermissionControl interface and several classes implementing this interface, such as RoleBasedPermissionControl, etc.
Now I have a problem. All my business logic is in the .NET app, not in the DB. So when a user opens the list of all permitted documents, all the documents are fetched from the DB and checked for permission in the app. This is a big performance issue: if the DB has 1,000,000 docs and the user has permissions for only 10 of them, it is a bad idea to fetch all of them. How can I solve this issue while leaving all the BL in the C# app?
Thanks in advance!
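For what it's worth, the heart of the issue is where the permission predicate is evaluated. The question uses EF Core/C#, but the idea of pushing the check into the database query can be sketched with any ORM; the following is a rough, hypothetical JPA version (the Document and DocumentPermission entities, their field names, and the 'READ' action are all made up) just to show that only permitted rows ever leave the database.

```java
// Hypothetical sketch (not from the question): push the permission check into
// the query itself instead of loading every document and checking it in memory.
// Document and DocumentPermission are made-up JPA entities; the real model in
// the question (EF Core / C#) would express the same predicate in LINQ.
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.TypedQuery;

public class PermittedDocumentRepository {

    private final EntityManager em;

    public PermittedDocumentRepository(EntityManager em) {
        this.em = em;
    }

    // Returns one page of documents the user is allowed to READ.
    // The EXISTS subquery runs inside the database, so a user with access to
    // 10 out of 1,000,000 documents only ever fetches those 10 rows.
    public List<Document> findReadable(long userId, int page, int pageSize) {
        TypedQuery<Document> query = em.createQuery(
            "select d from Document d where exists ("
                + " select p from DocumentPermission p"
                + " where p.documentId = d.id and p.userId = :userId and p.action = 'READ')",
            Document.class);
        query.setParameter("userId", userId);
        query.setFirstResult(page * pageSize);
        query.setMaxResults(pageSize);
        return query.getResultList();
    }
}

// Minimal assumed entities, only here to make the sketch self-contained.
@Entity
class Document {
    @Id Long id;
    String title;
}

@Entity
class DocumentPermission {
    @Id Long id;
    Long documentId;
    Long userId;
    String action; // e.g. "READ", "EDIT", "DELETE"
}
```
In EF Core the same idea would be a Where clause built from an expression, so the filter is translated to SQL rather than evaluated in memory, while the permission rules themselves still live in the C# code.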

Setting up Kibana with an Existing Solr Instance

I have an existing Solr instance (4.9) up and running with several cores. I've been trying to set up Kibana for the majority of the day and I can't figure out how to incorporate it with my instance of Solr. I'm running locally on Windows 7 for dev purposes but the production is Linux. I've read through here and here a few times and I'm not picking up on how to get this done. The banana project seemed like the easiest choice but adding it to the banana/ directory did nothing. I started up LucidWorks but wasn't able to figure out how to get my existing cores in here. I have about 1.5TB of data in all of my cores (9 of them) so re-indexing is not an option.
Can someone provide me with resources or tutorials on how to incorporate Kibana with an existing Solr instance or a tutorial on how this is done?
If banana can fulfill your requirements, it is still the easiest choice. I recently set up banana for one of my Solr 5 cores (even though it was released for a 4.x series as far as I can tell), and also had the initial problem that "nothing happened".
Get it from https://github.com/LucidWorks/banana/ and then follow the instructions in the QUICKSTART file:
Copy Banana folder to $SOLR_HOME/example/solr-webapp/webapp/
Browse to http://localhost:8983/solr/banana/src/index.html#/dashboard
Perform the copy so that you will have a banana folder in webapp/, do not copy the contents of the banana folder directly in webapp/
Further information on the banana dashboard configuration and its options, which I found useful, is available here: https://docs.lucidworks.com/display/SiLK/Dashboard+Configuration

Oracle ADF: How to develop an ADF web application with a team of 10 members

Hello, we need to develop a web application using Oracle ADF and JDeveloper 12c. This is a big project, so we need to develop this application with a team of 10 members. Our doubt is how we can develop a web application as a team.
Suppose that we have 10 modules and each module is developed by a different member (each member uses a different machine with the same environment, JDeveloper 12c). After developing all 10 modules, how can we combine them into a single application? How can we modify bindings, page flows and connection details?
Please help.
Thanks in advance.
You can find a comprehensive look at ADF team size and roles on the ADF Architecture TV channel.
In terms of architecture, I would suggest the SUM of the Parts pattern:
Simply put, every member can use their own workspace, where they will create bounded task flows based on fragments. Each of those workspaces is then packaged as an ADF Library and imported into a MASTER workspace, which acts as a portal. One of the most common patterns for implementing the master portal is the Dynamic Tabs UI Shell Template functional UI pattern.
I suggest you watch this to understand ADF large-project architecture better, and read this, this and this as well. This book may be of use.
The best approach will be to use a version control tool like SVN. Initially, one member of the team needs to commit the blank project structure to SVN; later on, the others can check out the code, implement their part and check it back in. The problem here is that you have to make sure no 2 developers are working on the same file at the same time, or else code conflicts may happen while committing.
Currently, I am also working on a project where some developers are onshore and some offshore, so we use SVN. JDeveloper itself comes with an SVN client, so there is no need to install a separate SVN client like TortoiseSVN or SmartSVN.
If you still need more information, you can reply back.

Apache Solr setup for two different projects

I just started using Apache Solr with its data import functionality on my project, following the steps in http://tudip.blogspot.in/2013/02/install-apache-solr-on-ubuntu.html.
Now I have to run two different instances of my project on the same server, with different databases but with the same Solr configuration for both projects. How can I do that?
Can anyone please help me?
Probably the closest you can get is having two different Solr cores. They will run under the same server but will have different configurations (which you can copy-paste).
When you say "different databases", do you mean you want to copy from several databases into one joined collection/core? If so, you just define multiple entities and possibly multiple datasources in your DataImportHandler config and run either all of them or each one individually. See the other question for some tips.
If, on the other hand, you mean different Solr cores/collections, then you just want to run Solr with multiple cores. It is very easy: you just need a solr.xml file above your collection level, as described on the Solr wiki. Once you get the basics, you may want to look at sharing instance directories and having separate data directories to avoid changing the same config twice (the instanceDir vs. dataDir settings on each core).
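To make the multi-core option a bit more concrete, here is a minimal SolrJ sketch. It assumes two cores named project1 and project2 (the names and field setup are made up), each with its own DataImportHandler config pointing at its own database; both live under the same Solr server and are addressed by their URL path.

```java
// Hypothetical sketch: two cores ("project1" and "project2") under the same
// Solr server, each backed by its own database via its own DataImportHandler
// config, queried independently through SolrJ.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TwoCoresExample {

    public static void main(String[] args) throws SolrServerException {
        // Same server (same port), different core in the URL path.
        HttpSolrServer project1 = new HttpSolrServer("http://localhost:8983/solr/project1");
        HttpSolrServer project2 = new HttpSolrServer("http://localhost:8983/solr/project2");

        SolrQuery query = new SolrQuery("*:*");
        query.setRows(5);

        QueryResponse r1 = project1.query(query);
        QueryResponse r2 = project2.query(query);

        System.out.println("project1 docs: " + r1.getResults().getNumFound());
        System.out.println("project2 docs: " + r2.getResults().getNumFound());
    }
}
```
Each core keeps its own config and index, so the two projects stay isolated even though they share one server and port.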

Multiple Solr One Application Server (Jboss 7.1)

Is it possible to have multiple Solrs in the same application server?
If yes, how can I do it?
I'm in need of 3 Solr instances and I want them running on the same application server.
I'm using Solr 3.6 and JBoss 7.1.
Thanks in advance!
It basically depends on what exactly your requirement is.
If your requirement is just to have 3 separate indexes to search upon for 3 different modules within a single application, you could probably go with multiple cores in the same Solr server.
Refer to http://wiki.apache.org/solr/CoreAdmin for more details regarding Solr cores.
If you are planning to host a separate search server for 3 independent applications, then I would suggest you go with 3 Solr instances on different ports, as given in the other answer.
Yes. You can deploy them on different ports.
http://localhost:8080/solr1
http://localhost:8081/solr2
http://localhost:8082/solr3
and so on.
Check out the instructions from this link http://wiki.apache.org/solr/SolrJBoss
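For completeness, a client simply targets whichever deployment it needs; the three webapps are fully independent indexes. Below is a small, hedged SolrJ sketch reusing the example URLs above (ping() only verifies that each instance responds; it assumes a standard /admin/ping handler in each deployment's config).

```java
// Hypothetical sketch: three independent Solr webapps deployed on JBoss,
// each reachable under its own port/context path, checked with a ping.
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class ThreeSolrDeployments {

    public static void main(String[] args) throws Exception {
        String[] baseUrls = {
            "http://localhost:8080/solr1",
            "http://localhost:8081/solr2",
            "http://localhost:8082/solr3"
        };

        for (String url : baseUrls) {
            HttpSolrServer solr = new HttpSolrServer(url);
            SolrPingResponse ping = solr.ping(); // each deployment has its own separate index
            System.out.println(url + " responded in " + ping.getQTime() + " ms");
        }
    }
}
```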
