I was able to work with D-Bus as a client, but if I compile https://github.com/bratsche/glib/blob/master/gio/tests/gdbus-example-server.c
the on_name_acquired callback is called and immediately afterwards the on_name_lost callback is called.
The only change that I made is that I use G_BUS_TYPE_SYSTEM instead of G_BUS_TYPE_SESSION.
I can only guess that this is some authentication issue.
Unlike the session bus, the system bus has a security policy which prevents arbitrary processes from claiming arbitrary well-known names on the bus. You need to install a configuration file for the system bus to allow your service to own a name:
Rules with the own or own_prefix attribute are checked when a
connection attempts to own a well-known bus name. As a special case,
own="*" matches any well-known bus name. The well-known session bus
normally allows any connection to own any name, while the well-known
system bus normally does not allow any connection to own any name,
except where allowed by further configuration. System services that
will own a name must install configuration that allows them to do so,
usually via rules of the form <policy user="some-system-user"><allow own="…"/></policy>.
This means installing a configuration file like the following in /usr/share/dbus-1/system.d/org.mydomain.MyService1.conf:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE busconfig PUBLIC
"-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
<!-- Only my-service-user can own the service -->
<policy user="my-service-user">
<allow own="org.mydomain.MyService1"/>
</policy>
<!-- Anyone can send messages to the service -->
<policy context="default">
<allow send_destination="org.mydomain.MyService1"/>
</policy>
</busconfig>
You must then run your service’s process as the my-service-user user.
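For context, the name request in the example server boils down to a g_bus_own_name() call. Here is a minimal sketch (not the full example server, reusing the bus name from the policy file above) showing that on_name_lost is the callback that fires when the bus policy refuses ownership:

/* Minimal sketch: request a well-known name on the system bus with GLib.
 * The name must match the one allowed by the policy file above. */
#include <gio/gio.h>

static void on_name_acquired (GDBusConnection *connection, const gchar *name, gpointer user_data)
{
  g_print ("Acquired the name %s\n", name);
}

static void on_name_lost (GDBusConnection *connection, const gchar *name, gpointer user_data)
{
  /* Also called when the bus refuses ownership, e.g. because the system bus
   * policy does not allow this user to own the name. */
  g_printerr ("Lost (or could not acquire) the name %s\n", name);
}

int main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  g_bus_own_name (G_BUS_TYPE_SYSTEM,
                  "org.mydomain.MyService1",
                  G_BUS_NAME_OWNER_FLAGS_NONE,
                  NULL,               /* bus_acquired handler */
                  on_name_acquired,
                  on_name_lost,
                  NULL, NULL);

  g_main_loop_run (loop);
  return 0;
}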
The D-Bus API design tutorial section on security policies is relevant reading.
Is there a way to configure Snowflake's connection pooling in WebSphere Application Server?
I tried the below config inside the server.xml file, but it is not working.
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
<properties db="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****" />
<jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
To clarify, the configuration that you have configures WebSphere Application Server Liberty's connection pooling for a Snowflake data source, rather than Snowflake's connection pooling.
The configuration that you have looks mostly correct.
However, looking at the SnowflakeBasicDataSource class that you are using, I can see that it has a property called "databaseName", not "db", so you'll need to switch that in your configuration.
You will also need to configure one of the jdbc-4.x features in Liberty if you haven't already, and if you plan to look it up in JNDI (vs inject it), you'll need the jndi-1.0 feature.
Here is an example with some corrections:
<featureManager>
<feature>jdbc-4.2</feature>
<feature>jndi-1.0</feature>
... your other features here
</featureManager>
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
<properties databaseName="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****" />
<jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
If this still doesn't work, check your definition of the DatacloudLibs library to ensure that it properly points at the Snowflake JDBC driver. If it still doesn't work after that, post the error message that you see, in case it helps to determine the cause.
I am building something on Linux that includes syslog.h, but I want to override the define for _PATH_LOG, which currently points to /dev/log.
I want the syslog API for this program only to point to a different socket and never send to /dev/log due to some unreasonable constraints imposed by Systemd. How can I override this define for this build alone? The define is nested in syslog-path.h which is included by syslog.h, so my program indirectly includes the header which defines this variable.
If you include syslog.h, you are using your machine's system logger (for example, syslogd).
By design, apps generate syslogs, and the system logging server decides where the logs go. If you want to use the system logger and have your app's logs go to a custom endpoint, you will need to modify your system logging server's configuration. You cannot specify a custom endpoint in code from within your app.
If you want to send syslogs directly from your app to a remote syslog server (bypassing the system logger), you can do that by using a third-party syslog client library.
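If the constraint is really just to keep this one program away from /dev/log, another option is to skip syslog(3) altogether and write to a Unix datagram socket of your choosing. A rough sketch, where the socket path and message tag are assumptions for illustration (this is plain socket code, not any library's API):

/* Send a log line directly to a custom Unix datagram socket instead of /dev/log. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define CUSTOM_LOG_SOCKET "/run/my-custom-log.sock"  /* hypothetical path */

static int log_to_custom_socket(int priority, const char *msg)
{
    struct sockaddr_un addr;
    char buf[1024];
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, CUSTOM_LOG_SOCKET, sizeof(addr.sun_path) - 1);

    /* RFC 3164-style "<PRI>tag: message" framing, as /dev/log listeners expect;
     * "myapp" is a placeholder tag. */
    snprintf(buf, sizeof(buf), "<%d>myapp: %s", priority, msg);

    ssize_t n = sendto(fd, buf, strlen(buf), 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return n < 0 ? -1 : 0;
}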
I have a SOAP/REST service implemented in CXF inside Red Hat JBoss Fuse (in a Fabric).
I need to protect it with Basic Authentication, and credentials must be checked on a LDAP server.
Can this be done without a custom interceptor?
Can I maybe use the container JAAS security (configured with LDAP) to protect the service the same way I can protect the console?
Yes, the container JAAS security realm can be used to protect a web service.
An example is here.
The example page doesn't explain the implementation, but a quick look at the blueprint.xml file reveals the following configuration:
<jaxrs:server id="customerService" address="/securecrm">
<jaxrs:serviceBeans>
<ref component-id="customerSvc"/>
</jaxrs:serviceBeans>
<jaxrs:providers>
<ref component-id="authenticationFilter"/>
</jaxrs:providers>
</jaxrs:server>
<bean id="authenticationFilter" class="org.apache.cxf.jaxrs.security.JAASAuthenticationFilter">
<!-- Name of the JAAS Context -->
<property name="contextName" value="karaf"/>
</bean>
So it's just a matter of configuring a JAAS authentication filter.
"karaf" is the default JAAS realm for the container: users are defined in etc/users.properties
To define more realms, info is here.
To have users on LDAP, see here.
The answer above is correct, but please note that for more recent versions of Fuse (past 6.1), the "rank" in the LDAP configuration must be greater than 100 in order to override the default karaf realm.
Also, with current patches applied, in Fuse 6.2.x connection pooling for the LDAP connections can be enabled. The property goes inside the LDAP login module of the JAAS realm (the enclosing jaas:config/jaas:module elements below are only sketched for context; the connect.pool property is the actual addition):
<jaas:config name="karaf" rank="200">
<jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="required">
<!-- ...LDAP connection and search options omitted... -->
<!-- LDAP connection pooling -->
<!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html -->
<!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html -->
context.com.sun.jndi.ldap.connect.pool=true
</jaas:module>
</jaas:config>
This is very important for high-volume web services: a pool of connections to the LDAP server is maintained, which both avoids connection-creation overhead and prevents closed sockets from lingering in the TIME_WAIT state.
I need to set up a secure website to transfer data between 5 computers located in different states. The data is sensitive.
I am planning to use Drupal. However, I have read many articles about Drupal 7 getting hacked, so I want to restrict website access using web.config. As far as I understand, nobody can even try to hack the website because it will not be accessible from any IP not listed in web.config. Does this guarantee 100% protection?
<security>
<ipSecurity allowUnlisted="false"> <!-- this line blocks everybody, except those listed below -->
<clear/> <!-- removes all upstream restrictions -->
<add ipAddress="127.0.0.1" allowed="true"/> <!-- allow requests from the local machine -->
<add ipAddress="83.xxx.xx.53" allowed="true"/> <!-- allow the specific IP of 83.116.19.53 -->
<add ipAddress="83.xxx.xx.0" subnetMask="xxx.255.255.0" allowed="true"/> <!--allow network 83.116.119.0 to 83.116.119.255-->
<add ipAddress="83.xxx.0.0" subnetMask="2xx55.255.0.0" allowed="true"/> <!--allow network 83.116.0.0 to 83.116.255.255-->
<add ipAddress="83.xxxx.0.0" subnetMask="255.0.0.0" allowed="true"/> <!--allow entire /8 network of 83.0.0.0 to 83.xxx.255.255-->
</ipSecurity>
</security>
As long as your web server is exposed to the Internet, it is vulnerable to hacking attempts. There could be a flaw, current or future, that allows the hacker to bypass application level IP restrictions.
You should explicitly deny access to the web server (typically port 80 for HTTP and port 443 for HTTPS, which I'm sure you are using since your website deals with sensitive data) for most IP addresses, and explicitly allow access only for the IPs you have listed.
Issue:
An issue exists whereby I cannot access my Self Hosted ADO.NET Data Services from my RIA applications.
My services are hosted separately from the web projects that contain the Rich Internet Applications (RIAs).
I need to enable access from separate Silverlight (and Flash) client apps.
From Silverlight I get an exception (see below) when I try to make a call to the ADO.NET Data Service (which is self-hosted separately). I believe this is due to Silverlight forbidding the cross-domain call.
System.InvalidOperationException: An error occurred while saving changes. See the inner exception for details. --->
System.Data.Services.Http.WebException: Internal error at 'HttpWebResponse.NormalizeResponseStatus'.
at System.Data.Services.Http.HttpWebResponse.NormalizeResponseStatus(Int32& statusCode)
at System.Data.Services.Http.HttpWebResponse..ctor(HttpWebRequest request, Int32 statusCode, String responseHeaders)
at System.Data.Services.Http.HttpWebRequest.CreateResponse()
at System.Data.Services.Http.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Data.Services.Client.QueryAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Data.Services.Client.BaseAsyncResult.EndExecute[T](Object source, String method, IAsyncResult asyncResult)
at System.Data.Services.Client.QueryAsyncResult.EndExecute[TElement](Object source, IAsyncResult asyncResult)
at System.Data.Services.Client.DataServiceQuery`1.EndExecute(IAsyncResult asyncResult)
at Curo.Silverlight.MainPage.<>c__DisplayClass1.<.ctor>b__0(IAsyncResult ar)
at System.Data.Services.Client.BaseAsyncResult.HandleCompleted()
at System.Data.Services.Client.QueryAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult)
at System.Data.Services.Http.HttpWebRequest.ReadyStateChanged()
System.Data.Services.Http.WebException: Internal error at 'HttpWebResponse.NormalizeResponseStatus'.
at System.Data.Services.Http.HttpWebResponse.NormalizeResponseStatus(Int32& statusCode)
at System.Data.Services.Http.HttpWebResponse..ctor(HttpWebRequest request, Int32 statusCode, String responseHeaders)
at System.Data.Services.Http.HttpWebRequest.CreateResponse()
at System.Data.Services.Http.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Data.Services.Client.QueryAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult)
Notes:
From what I have read, it appears that cross-domain access is forbidden with regard to ADO.NET Data Services, which may result in my having to take another approach to the data access, e.g. using a pure REST framework?
"The problem of Cross Domain ADO.NET
Data Services is more complex than it
sounds and it hasn't been solved.
I've discussed it with Microsoft for a
while now and the reason that it
doesn't work has to do with its using
a browser level transport and that
transport doesn't allow cross-site
scripting."
See:
http://forums.silverlight.net/forums/p/70925/170703.aspx#170703
I understand that I may need to expose a ClientAccessPolicy.xml file, which will define the access rules whilst restricting cross-site scripting.
It is also worth mentioning that the RIA applications will be running on the same LAN.
Questions:
Is there a viable means for me to access the services from my RIA clients considering they will be running behind the same firewall? If so how?
How do I expose ClientAccessPolicy.xml from a Self Hosted ADO.NET Data Service exactly?
What way would you recommend proceeding in order to allow external access to my services?
- Different REST Framework?
- Host Services within same web project at the cost of separation?
- Any other advice...
Thanks.
I'm not sure I understand the full breadth of your problem, but at the very least, I would make sure I had a clientaccesspolicy.xml file and a crossdomain.xml file in the root folder of the service. It's important for the xml policy files to be in the root folder of the domain. For example, if your service is hosted in mycompany.com/services, the xml files need to be in the mycompany.com folder, not the services folder.
Here's an example of the ClientAccessPolicy.xml:
<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
<cross-domain-access>
<policy>
<allow-from http-request-headers="*">
<domain uri="*"/>
</allow-from>
<grant-to>
<resource include-subpaths="true" path="/"/>
</grant-to>
</policy>
</cross-domain-access>
</access-policy>
And here's an example of the crossdomain.xml:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-http-request-headers-from domain="*" headers="*" />
</cross-domain-policy>
I would recommend using both files, to cover both Flash and Silverlight. Both files above will allow open access from all Flash and Silverlight apps, but that shouldn't be a problem if you're behind a firewall.
I had this exact problem in one of my behind-the-firewall silverlight apps and putting these files in place seemed to fix the problem. I would start with these files and go from there.
"The problem of Cross Domain ADO.NET Data Services is more complex than it sounds and it hasn't been solved. I've discussed it with Microsoft for a while now and the reason that it doesn't work has to do with its using a browser level transport and that transport doesn't allow cross-site scripting."
See: http://forums.silverlight.net/forums/p/70925/170703.aspx#170703
The cross-domain policy file is required (as shown in the answer by Ben McCormack above).
By utilizing Yahoo Pipes, which is set up to allow cross-domain access to aggregated feeds, you may be able to consume an external ADO.NET Data Service (formerly Astoria, now OData) from within a Silverlight application.
You will most likely lose the fidelity of querying the dataset that OData gives you, but this could be recreated in the Yahoo Pipes.
The issue was not with ADO.NET Data Services (OData); it was with Silverlight, which does not allow cross-domain calls.