LDAPMAP - Mapping SAP data to LDAP via the RSLDAPSYNC_USER function

We are looking at syncing some of our LDAP (Active Directory) data with what is stored in SAP. SAP provides several function modules that allow you to write a custom program to handle mapping the data, but we are looking to use the provided solution that makes use of RSLDAPSYNC_USER.
The issue I'm having is understanding how the mapping of fields is performed in LDAPMAP. In particular, when performing the Mapping Overview, where are the structures as shown below defined?
Also, we have a function module that is currently available for grabbing all of the fields we would like to send to LDAP, but can the screen shown below be used to call a custom function module to grab the data I require? If so, then please give an example.
Thanks,
Mike

I am not sure if this is what you are asking, but as an answer to your second question:
You can specify the attributes that you want to retrieve, and the LDAP_READ function module will return the results in its entries parameter.
CALL FUNCTION 'LDAP_READ'
  EXPORTING
    base         = base
*   scope        = 2
    filter       = filter
*   attributes   = attributes_ldap
    timeout      = s_timeout
    attributes   = t_attributes_ldap
  IMPORTING
    entries      = t_entries_ldap   "<< entries will come back here
  EXCEPTIONS
    no_authoriz  = 1
    conn_outdate = 2
    ldap_failure = 3
    not_alive    = 4
    other_error  = 5
    OTHERS       = 6.
(Screenshots of the entries and attributes table structures followed here.)

Related

How do I create a Flow with different input and output types for use inside of a graph?

I am making a custom sink by building a graph on the inside. Here is a broad simplification of my code to demonstrate my question:
def mySink: Sink[Int, Unit] = Sink() { implicit builder =>
  val entrance = builder.add(Flow[Int].buffer(500, OverflowStrategy.backpressure))
  val toString = builder.add(Flow[Int, String, Unit].map(_.toString))
  val printSink = builder.add(Sink.foreach(elem => println(elem)))
  builder.addEdge(entrance.out, toString.in)
  builder.addEdge(toString.out, printSink.in)
  entrance.in
}
The problem I am having is that while it is valid to create a Flow with the same input/output types using only a single type argument and no value arguments, like Flow[Int] (which is all over the documentation), it is not valid to supply only two type parameters and zero value parameters.
According to the reference documentation for the Flow object, the apply method I am looking for is defined as
def apply[I, O]()(block: (Builder[Unit]) ⇒ (Inlet[I], Outlet[O])): Flow[I, O, Unit]
and says
Creates a Flow by passing a FlowGraph.Builder to the given create function.
The create function is expected to return a pair of Inlet and Outlet which correspond to the created Flows input and output ports.
It seems like I need to deal with another level of graph builders when I am trying to make what I think is a very simple flow. Is there an easier and more concise way to create a Flow that changes the type of its input and output that doesn't require messing with its inside ports? If this is the right way to approach this problem, what would a solution look like?
BONUS: Why is it easy to make a Flow that doesn't change the type of its input from its output?
If you want to specify both the input and the output type of a flow, you indeed need to use the apply method you found in the documentation. Using it, though, works pretty much the same way as what you already did.
Flow[String, Message]() { implicit b =>
  import FlowGraph.Implicits._

  val reverseString = b.add(Flow[String].map[String] { msg => msg.reverse })
  val mapStringToMsg = b.add(Flow[String].map[Message](x => TextMessage.Strict(x)))

  // connect the graph
  reverseString ~> mapStringToMsg

  // expose ports
  (reverseString.inlet, mapStringToMsg.outlet)
}
Instead of just returning the inlet, you return a tuple with the inlet and the outlet. This flow can now be used (for instance inside another builder, or directly with runWith) with a specific Source or Sink.
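For example, a minimal usage sketch (not part of the original answer; it assumes the same pre-1.0 akka-stream API shown above, an implicit ActorSystem and materializer in scope, and that the flow above has been assigned to a value named stringToMessage):
// Hypothetical wiring: run the two-type flow between a concrete Source and Sink.
Source(List("hello", "world"))
  .via(stringToMessage)            // the Flow[String, Message, Unit] built above
  .runWith(Sink.foreach(println))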

What is the best practice way to initialize an actor from the database

I have a top-level actor (under the guardian) called Groups, which on startup needs to load the list of groups from the database and create a bunch of child actors based on those groups.
I have placed the database load code inside of the preStart function as I don't want any messages to be processed before the groups are loaded.
Currently my Groups actor looks like this:
var groups: Map[String, ActorRef] = Map()

override def preStart() = {
  groups = getGroupsFromDB() map createGroup
}

def createGroup(pair: (String, Long)) = {
  val (name, id) = pair
  val group = context.actorOf(Props(new Group(id, name)), name = name)
  name -> group
}
However, I don't believe this is the best way to handle it: what happens if the database server is not available? So what is the best-practice way of handling data initialization from a database?
The Akka documentation explains how to supervise top-level actors for fault tolerance.
You can apply the principles there to handle the exceptions you may get if the DB is not available.
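As an illustration only (not from the original answer, and with hypothetical names such as GroupsSupervisor), a minimal sketch of a parent that supervises the Groups actor: a failure thrown from preStart reaches the supervisor wrapped in an ActorInitializationException, so the parent can choose to restart the child a limited number of times instead of letting it stop.
import akka.actor._
import scala.concurrent.duration._

class GroupsSupervisor extends Actor {
  // Restart the Groups child a few times if its preStart (the DB load) fails,
  // then give up and escalate.
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 1.minute) {
      case _: ActorInitializationException => SupervisorStrategy.Restart
      case _: Exception                    => SupervisorStrategy.Escalate
    }

  val groups = context.actorOf(Props[Groups], name = "groups")

  def receive = {
    case msg => groups forward msg
  }
}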

Document status that depends on the user type

I have the following objects: L1User, L2User, L3User (all inheriting from User) and Document.
Every user can create a document, but depending on the user type the document will have a different status. So if it is an L1User, the document will be created with L1 status, and so on.
Solution 1
Please note that after the document is created it will be saved in the database, so it seems natural to have a method create_document(User user) on the Document object. In the method body I could check which type the user is and manually set the appropriate status. Such an approach doesn't seem very OOP to me.
Solution 2
Ok, so the next approach would be to have all users implement a common method (say create_document(Document doc)) which sets the status associated with the user and saves the document in the database. My doubt here is that the document should be saved in its own class, not in the user.
Solution 3
So the final approach would be similar to the above, except that the user returns the modified document object to the document's create_document(User user) method, and the save is performed there. The definition of the method would look like this:
create_document(User user)
{
    this = user.create_document(this);
    this->save();
}
It also doesn't seem right to me...
Can anyone suggest a better approach?
I think that both solutions 2 and 3 are OK from the OO point of view, since you are properly delegating the status assignment to the user object (contrary to solution 1, where you are basically doing a switch based on the user type). Whether to choose 2 or 3 is more a matter of personal taste.
However, I have a doubt: why do you pass a document to a create_document() method? I would go for a message name that best describes what it does. For example, in solution 3 (the one I like the most) I would go for:
Document>>create_document(User user)
{
    this = user.create_document();
    this->save();
}
and then
L1User>>create_document()
{
    return new Document('L1');
}
or
Document>>create_document(User user)
{
    this = new Document();
    this = user.set_document_type(this);
    this->save();
}
and then
L1User>>set_document_type(document)
{
    document.setType('L1');
}
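Purely as an illustration (not part of the original answer, and with made-up names), the first variant might look like this in Scala:
// Each User subtype decides the status; Document only stores it and saves itself.
trait User {
  def createDocument(): Document
}

class L1User extends User {
  override def createDocument(): Document = new Document("L1")
}

class L2User extends User {
  override def createDocument(): Document = new Document("L2")
}

class Document(val status: String) {
  def save(): Unit = {
    // persist the document; stubbed out here
    println(s"saving document with status $status")
  }
}

object DocumentService {
  // Rough equivalent of Document>>create_document(User user) above.
  def createDocument(user: User): Document = {
    val doc = user.createDocument()
    doc.save()
    doc
  }
}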
Edit: I kept thinking about this and there is actually a fourth solution. However the following approach works only if the status of a document doesn't change through its lifetime and you can map the DB field with a getter instead of a property. Since the document already knows the user and the status depends on the user, you can just delegate:
Document>>getStatus()
{
    return this.user.getDocumentStatus();
}
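Again just as an illustration with hypothetical names, the same delegation in Scala:
// The status is never stored on Document; it is derived from the owning user on demand.
trait User {
  def documentStatus: String
}

class L1User extends User {
  override def documentStatus: String = "L1"
}

class Document(owner: User) {
  def status: String = owner.documentStatus
}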
HTH

Mapping first 3 records in database to variables

I have a test spec where I use the following line of code to assign the session tokens from my table to 3 variables:
@auth_token, @auth2_token, @auth3_token = Session.limit(3).map(&:token)
I now wish to assign 3 variables from my Roles table, but this time not restricted to a single attribute: I want the whole Role object. I have tried the following but it doesn't seem to be working:
@role1, @role2, @role3 = Role.limit(3).map
Can this be achieved? Any pointers would be greatly appreciated !!
It works for the auth tokens because map converts the relation object into an array, which then gets assigned to the variables. For the roles, calling map without a block just returns an Enumerator, not an array.
You can just call to_a directly on the relation object returned by the limit call in order to convert it to an array.
@role1, @role2, @role3 = Role.limit(3).to_a
Wasn't sure how to go about this but got round the problem using the following:
@role1 = Role.find_by_name!("First")
@role2 = Role.find_by_name!("Second")
@role3 = Role.find_by_name!("Third")

Play Scala: How to access multiple databases with anorm and Magic[T]

I want to access two databases in Play Scala with anorm and Magic[T] (one is H2 and the other is PostgreSQL). I just don't know how to configure it...
I noticed that we can set another database connection in conf/application.conf
db_other.url=jdbc:mysql://localhost/test
db_other.driver=com.mysql.jdbc.Driver
db_other.user=root
db_other.pass=
However, how can I use it with Magic?
(I read the source code of Magic but don't understand it... I am new to Scala.)
Anyhow, if multiple-database access is impossible with Magic[T], I would like to do it with anorm instead; how can I configure that?
var sqlQuery = SQL( // I guess some config params should be set here, but how?
  """
    select * from Country
  """
)
In play.api.db.DB it appears you can pass in a string of the name you defined in application.conf.
Then use one of the methods specified here: http://www.playframework.org/documentation/2.0/ScalaDatabase
# play.api.db.DB.class
def withConnection[A](name : scala.Predef.String)(block : scala.Function1[java.sql.Connection, A])(implicit app : play.api.Application) : A = { /* compiled code */ }
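For example, a minimal sketch (not from the original answer; it assumes a Play 2.x application with a second datasource configured under the name "other", and an illustrative Country table with a name column):
import anorm._
import play.api.Play.current
import play.api.db.DB

// Open a connection against the datasource named "other" and run the query on it.
val countryNames: List[String] = DB.withConnection("other") { implicit connection =>
  SQL("select * from Country")().map(row => row[String]("name")).toList
}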
