I am using the Microsoft Graph API for Java. At the beginning of the skipToken I received inside @odata.nextLink, there is an unexpected prefix (m~) before the actual skip token string (see below). The skip token works fine after I remove the m~.
But I am confused about why this happened. Can other unexpected characters affect the skipToken in the future, and what can I do to prevent that?
I am using msgraph-sdk-java version 2.4.0.
https://graph.microsoft.com/v1.0/users?$select=givenName%2csurname%2cuserPrincipalName%2cbusinessPhones%2cassignedPlans&$count=true&$orderby=displayName&$filter=&$top=2&$skiptoken=m~X%270100B7013B3B33303030343530303330303033323030333030303435303033313030343130303330303034353030333230303331303033303030343530303334303033383030333030303435303033323030333130303330303033373030333030303332303033303030343530303431303033323030333030303435303033303030333230303330303034353030333730303330303033303030343530303330303034313030333030303435303033323030333130303B313B303B%27
I'm not sure how you ended up with the m~ in the skiptoken, but I can page through users successfully with the Microsoft Graph SDK for Java using the code below:
package com.graph;
import java.util.List;
import com.azure.identity.ClientSecretCredential;
import com.azure.identity.ClientSecretCredentialBuilder;
import com.microsoft.graph.authentication.TokenCredentialAuthProvider;
import com.microsoft.graph.models.User;
import com.microsoft.graph.requests.GraphServiceClient;
import com.microsoft.graph.requests.UserCollectionPage;
public class Testgraph {
    public static void main(String[] args) {
        final ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
                .clientId("clientId")
                .clientSecret("clientSecret")
                .tenantId("tenantId")
                .build();
        final TokenCredentialAuthProvider tokenCredentialAuthProvider = new TokenCredentialAuthProvider(clientSecretCredential);
        final GraphServiceClient graphClient = GraphServiceClient
                .builder()
                .authenticationProvider(tokenCredentialAuthProvider)
                .buildClient();

        // You can use the code below to get the current page of users
        UserCollectionPage users = graphClient.users()
                .buildRequest()
                .get();
        List<User> userList = users.getCurrentPage();
        for (User user : userList) {
            System.out.println(user.displayName);
        }

        // If you want to get the next page, use the code below
        // (getNextPage() returns null when there are no more pages)
        UserCollectionPage users1 = users.getNextPage().buildRequest().get();
        List<User> userList1 = users1.getCurrentPage();
        for (User user : userList1) {
            System.out.println(user.displayName);
        }
    }
}
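If you want to walk every page instead of fetching just one extra page, a loop along the following lines should work. This is only a sketch reusing the same SDK classes as above; the point is that the SDK follows @odata.nextLink (including its skiptoken) for you, so you should never have to parse or edit the token yourself:
// Sketch: iterate over all pages of users.
// getNextPage() returns null once the last page has been fetched.
UserCollectionPage page = graphClient.users()
        .buildRequest()
        .get();
while (page != null) {
    for (User user : page.getCurrentPage()) {
        System.out.println(user.displayName);
    }
    page = page.getNextPage() == null ? null : page.getNextPage().buildRequest().get();
}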
FirebaseOptions options = new FirebaseOptions.Builder()
    .setCredential(FirebaseCredentials.applicationDefault())
    .setDatabaseUrl("https://mkastrive.firebaseio.com")
    .build();
FirebaseApp defaultApp = FirebaseApp.initializeApp(options);

DatabaseReference ref = defaultDatabase
    .getInstance()
    .getReference("users");

ref.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        System.out.println("in onDataChange");
        System.out.println(dataSnapshot.getValue());
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        System.out.println("in onCancelled");
        System.out.println(databaseError.toString());
    }
});
I'm doing the above in the Google Cloud module in Android. I think my Firebase initialization is successful, because System.out.println("usersRef.push(): " + usersRef.push()); // Working
But I do not see anything from addListenerForSingleValueEvent, and I do not see any errors or warnings in the logs either. My database rules are set up so that anyone can read/write data.
Update 1: Following the suggestion to use setValue(), I tried the example from Firebase's documentation:
DatabaseReference usersRef1 = ref.child("users");
Map<String, User> users = new HashMap<String, User>();
users.put("alanisawesome", new User("June 23, 1912", "Alan Turing"));
users.put("gracehop", new User("December 9, 1906", "Grace Hopper"));
usersRef1.setValue(users);
But this is not inserting into the database either, and again there are no errors; the log is blank.
Update 2: Some logs:
FirebaseApp defaultApp = FirebaseApp.initializeApp(options);
this.defaultDatabase = FirebaseDatabase.getInstance().getReference();
defaultDatabase.child("users").getPath(): https://mkastrive.firebaseio.com
defaultDatabase.child("users").getPath(): /users
Calling push() doesn't make any changes to the database. That's probably why your listener isn't being invoked. push() just returns a DatabaseReference that you can use to make changes at the location represented by that reference. The key of that location (the unique push id) is generated completely on the client.
Try actually writing a value to the database using setValue() on the DatabaseReference returned by push().
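For example, a minimal write along these lines should show up in the database and then trigger your listener. This is only a sketch against the ref from your snippet, with a placeholder value; the completion callback is the standard DatabaseReference.CompletionListener:
// Sketch: write a placeholder value at a new push id and report the result.
DatabaseReference newChild = ref.push();
newChild.setValue("hello", new DatabaseReference.CompletionListener() {
    @Override
    public void onComplete(DatabaseError error, DatabaseReference committedRef) {
        if (error != null) {
            System.out.println("Write failed: " + error.getMessage());
        } else {
            System.out.println("Write succeeded at: " + committedRef);
        }
    }
});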
We have a SOAP web service that we are migrating from JBoss EAP 5.1 to 6.4.7. One of the web services returns absolutely nothing but a 200 (in JBoss 5). After migrating to 6 it still works and returns nothing, but it returns a 202 instead, and that is going to break clients. We have no control over the clients. I tried a SOAPHandler at the close method, but it does nothing; it is not even called. My guess is that since there is no SOAP message going back, there is nothing that triggers the handler.
I also tried to access the context directly in the web method and modify the status, but it did nothing:
MessageContext ctx = wsContext.getMessageContext();
HttpServletResponse response = (HttpServletResponse) ctx.get(MessageContext.SERVLET_RESPONSE);
response.setStatus(HttpServletResponse.SC_OK);
I couldn't find anything in the manual.
Any direction is very much appreciated.
Here is how the port and the head of its implementation look:
#WebService(name = "ForecastServicePortType", targetNamespace = "http://www.company.com/forecastservice/wsdl")
#SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
#XmlSeeAlso({
ObjectFactory.class
})
public interface ForecastServicePortType {
/**
*
* #param parameters
* #throws RemoteException
*/
#WebMethod(action = "http://www.company.com/forecast/sendForecast")
public void sendForecast(
#WebParam(name = "SendForecast", targetNamespace = "http://www.company.com/forecastservice", partName = "parameters")
SendForecastType parameters) throws RemoteException;
}
#WebService(name = "ForecastServicePortTypeImpl", serviceName = "ForecastServicePortType", endpointInterface = "com.company.forecastservice.wsdl.ForecastServicePortType", wsdlLocation = "/WEB-INF/wsdl/ForecastServicePortType.wsdl")
#HandlerChain(file = "/META-INF/handlers.xml")
public class ForecastServicePortTypeImpl implements ForecastServicePortType {
...
}
In case anybody finds this useful, here is the solution.
Apache CXF treats such requests as one-way by default: even when the @Oneway annotation is missing, it still behaves as if it were there.
So in order to disable that behaviour, an interceptor needs to be created like the one below:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;
import java.util.Arrays;

public class DisableOneWayInterceptor extends AbstractSoapInterceptor {

    private static final Log LOG = LogFactory.getLog(DisableOneWayInterceptor.class);

    public DisableOneWayInterceptor() {
        super(Phase.PRE_LOGICAL);
        addBefore(Arrays.asList(org.apache.cxf.interceptor.OneWayProcessorInterceptor.class.getName()));
    }

    @Override
    public void handleMessage(SoapMessage soapMessage) throws Fault {
        if (LOG.isDebugEnabled())
            LOG.debug("OneWay behavior disabled");
        soapMessage.getExchange().setOneWay(false);
    }
}
And registered on the web service class (the one annotated with @WebService) as below:
@org.apache.cxf.interceptor.InInterceptors(interceptors = {"com.mycompany.interceptors.DisableOneWayInterceptor"})
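Putting the pieces together, the registration sits on the endpoint implementation shown earlier, roughly like this (just a sketch reusing the class names from above; the handler chain annotation stays as it was):
@WebService(name = "ForecastServicePortTypeImpl", serviceName = "ForecastServicePortType", endpointInterface = "com.company.forecastservice.wsdl.ForecastServicePortType", wsdlLocation = "/WEB-INF/wsdl/ForecastServicePortType.wsdl")
@HandlerChain(file = "/META-INF/handlers.xml")
@org.apache.cxf.interceptor.InInterceptors(interceptors = {"com.mycompany.interceptors.DisableOneWayInterceptor"})
public class ForecastServicePortTypeImpl implements ForecastServicePortType {

    @Override
    public void sendForecast(SendForecastType parameters) {
        // With the exchange no longer marked one-way, CXF waits for this method
        // to return and responds with 200 instead of the early 202 acknowledgement.
    }
}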
I am testing out Google App Engine with BigQuery.
I am able to run BigQuery fine in Eclipse when I run it as a plain app; however, when I run it as an HttpServlet I keep getting the following error:
java.lang.NoClassDefFoundError: com/google/api/client/json/JsonFactory
Below is the exact code I am using.
package com.hw3.test;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.BigqueryScopes;
import com.google.api.services.bigquery.model.GetQueryResultsResponse;
import com.google.api.services.bigquery.model.QueryRequest;
import com.google.api.services.bigquery.model.QueryResponse;
import com.google.api.services.bigquery.model.TableCell;
import com.google.api.services.bigquery.model.TableRow;
import java.io.IOException;
import javax.servlet.http.*;
import java.util.List;
import java.util.Scanner;
#SuppressWarnings("serial")
public class HelloWord3Servlet extends HttpServlet {
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
Bigquery bigquery = createAuthorizedClient(); //If i comment this out i will get the text below, else i get the error from the title.
resp.setContentType("text/plain");
resp.getWriter().println("\nQuery Results:\n------------\n");
}
private static List<TableRow> executeQuery(String querySql, Bigquery bigquery, String projectId)
throws IOException {
QueryResponse query = bigquery.jobs().query(projectId, new QueryRequest().setQuery(querySql)).execute();
// Execute it
GetQueryResultsResponse queryResult = bigquery.jobs()
.getQueryResults(query.getJobReference().getProjectId(), query.getJobReference().getJobId()).execute();
return queryResult.getRows();
}
public static Bigquery createAuthorizedClient() throws IOException {
// Create the credential
HttpTransport transport = new NetHttpTransport();
JsonFactory jsonFactory = new JacksonFactory();
GoogleCredential credential = GoogleCredential.getApplicationDefault(transport, jsonFactory);
// Depending on the environment that provides the default credentials
// (e.g. Compute Engine, App
// Engine), the credentials may require us to specify the scopes we need
// explicitly.
// Check for this case, and inject the Bigquery scope if required.
if (credential.createScopedRequired()) {
credential = credential.createScoped(BigqueryScopes.all());
}
return new Bigquery.Builder(transport, jsonFactory, credential).setApplicationName("Bigquery Samples").build();
}
public static void main(String[] args) throws IOException {
Scanner sc;
if (args.length == 0) {
// Prompt the user to enter the id of the project to run the queries
// under
System.out.print("Enter the project ID: ");
sc = new Scanner(System.in);
} else {
sc = new Scanner(args[0]);
}
String projectId = sc.nextLine();
// Create a new Bigquery client authorized via Application Default
// Credentials.
Bigquery bigquery = createAuthorizedClient();
List<TableRow> rows = executeQuery(
"SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words " + "FROM [publicdata:samples.shakespeare]",
bigquery, projectId);
printResults(rows);
}
private static void printResults(List<TableRow> rows) {
System.out.print("\nQuery Results:\n------------\n");
for (TableRow row : rows) {
for (TableCell field : row.getF()) {
System.out.printf("%-50s", field.getV());
}
System.out.println();
}
}
}
I got this code directly from the Google website, although I did modify it slightly so that I could test out App Engine. However, it will not work when using App Engine.
Any help is greatly appreciated!
It sounds like dependencies aren't configured correctly when you are running as an HttpServlet. How do you tell your app which dependencies to use? What version are you trying to load? Is that version available in Google App Engine?
Note that the specific version of the Jackson libraries you require changes depending on what environment you are running in. See https://developers.google.com/api-client-library/java/google-http-java-client/setup for a list of the dependencies you need in various environments.
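If it's unclear whether the jar is actually on the servlet's runtime classpath (as opposed to only on the Eclipse build path), a quick check like the following inside doGet() can tell you. This is just a hypothetical diagnostic, not part of the BigQuery sample:
// Diagnostic sketch: check whether the Google HTTP client JSON classes are
// visible to the servlet's classloader at runtime (i.e. deployed in WEB-INF/lib).
try {
    Class.forName("com.google.api.client.json.JsonFactory");
    resp.getWriter().println("JsonFactory is on the runtime classpath");
} catch (ClassNotFoundException e) {
    resp.getWriter().println("JsonFactory is NOT on the runtime classpath: " + e);
}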
Found the answer to my own question, for those having the same problem: when working with HTTP servlets I needed to have the jars inside the WEB-INF/lib directory; otherwise I could just keep them on the Java build path (Libraries). So in Eclipse, right-click on lib, then Add Google APIs, and then select BigQuery.
I need some help with Flink Streaming. I have written a simple hello-world type of example below. It streams Avro messages from RabbitMQ and persists them to HDFS. I hope someone can review the code, and maybe it can help others.
Most examples I've found for Flink streaming send results to stdout. I actually want to save the data to Hadoop. I read that, in theory, you can stream with Flink to wherever you like, but I haven't found any example that actually saves data to HDFS. Based on the examples I did find, and some trial and error, I have come up with the code below.
The source of the data here is RabbitMQ. I use a client app to send "MyAvroObjects" to RabbitMQ. MyAvroObject.java (not included) is generated from an Avro IDL; it can be any Avro message.
The code below consumes the RabbitMQ messages and saves them to HDFS as Avro files... Well, that's what I hope.
package com.johanw.flink.stackoverflow;
import java.io.IOException;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroOutputFormat;
import org.apache.avro.mapred.AvroWrapper;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.hadoop.mapred.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.FileSinkFunctionByMillis;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class RMQToHadoop {

    public class MyDeserializationSchema implements DeserializationSchema<MyAvroObject> {
        private static final long serialVersionUID = 1L;

        @Override
        public TypeInformation<MyAvroObject> getProducedType() {
            return TypeExtractor.getForClass(MyAvroObject.class);
        }

        @Override
        public MyAvroObject deserialize(byte[] array) throws IOException {
            SpecificDatumReader<MyAvroObject> reader = new SpecificDatumReader<MyAvroObject>(MyAvroObject.getClassSchema());
            Decoder decoder = DecoderFactory.get().binaryDecoder(array, null);
            MyAvroObject myAvroObject = reader.read(null, decoder);
            return myAvroObject;
        }

        @Override
        public boolean isEndOfStream(MyAvroObject arg0) {
            return false;
        }
    }

    private String hostName;
    private String queueName;

    public final static String path = "/hdfsroot";

    private static Logger logger = LoggerFactory.getLogger(RMQToHadoop.class);

    public RMQToHadoop(String hostName, String queueName) {
        super();
        this.hostName = hostName;
        this.queueName = queueName;
    }

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    public void run() {
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        logger.info("Running " + RMQToHadoop.class.getName());
        DataStream<MyAvroObject> socketStockStream = env.addSource(new RMQSource<>(hostName, queueName, new MyDeserializationSchema()));
        Job job;
        try {
            job = Job.getInstance();
            AvroJob.setInputKeySchema(job, MyAvroObject.getClassSchema());
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        try {
            JobConf jobConf = new JobConf(Job.getInstance().getConfiguration());
            jobConf.set("avro.output.schema", MyAvroObject.getClassSchema().toString());
            org.apache.avro.mapred.AvroOutputFormat<MyAvroObject> akof = new AvroOutputFormat<MyAvroObject>();
            HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable> hof = new HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable>(akof, jobConf);
            FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>> fileSinkFunctionByMillis = new FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>(hof, 10000L);
            org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(jobConf, new Path(path));
            socketStockStream.map(new MapFunction<MyAvroObject, Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>() {
                private static final long serialVersionUID = 1L;

                @Override
                public Tuple2<AvroWrapper<MyAvroObject>, NullWritable> map(MyAvroObject envelope) throws Exception {
                    logger.info("map");
                    AvroKey<MyAvroObject> key = new AvroKey<MyAvroObject>(envelope);
                    Tuple2<AvroWrapper<MyAvroObject>, NullWritable> tuple = new Tuple2<AvroWrapper<MyAvroObject>, NullWritable>(key, NullWritable.get());
                    return tuple;
                }
            }).addSink(fileSinkFunctionByMillis);
            try {
                env.execute();
            } catch (Exception e) {
                logger.error("Error while running " + RMQToHadoop.class + ".", e);
            }
        } catch (IOException e) {
            logger.error("Error while running " + RMQToHadoop.class + ".", e);
        }
    }

    public static void main(String[] args) throws IOException {
        RMQToHadoop toHadoop = new RMQToHadoop("localhost", "rabbitTestQueue");
        toHadoop.run();
    }
}
If you prefer a source other than RabbitMQ, it works fine with a different source as well, e.g. using a Kafka consumer:
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
...
DataStreamSource<MyAvroObject> socketStockStream = env.addSource(new FlinkKafkaConsumer082<MyAvroObject>(topic, new MyDeserializationSchema(), sourceProperties));
Questions:
Please review. Is this good practice for saving data to HDFS?
What if the streaming process runs into an issue, say during serialisation? It throws an exception and the code just exits. Spark Streaming relies on Yarn to automatically restart the app. Is that also good practice when using Flink?
I'm using FileSinkFunctionByMillis. I was actually hoping to use something like an HdfsSinkFunction, but that doesn't exist, so FileSinkFunctionByMillis was the closest thing that made sense to me. Again, the documentation I found lacks any explanation of what to do, so I'm only guessing.
When I run this locally, I find a directory structure like "C:\hdfsroot_temporary\0_temporary\attempt__0000_r_000001_0", which is... bizarre. Any ideas here?
By the way, when you want to save the data back to Kafka, I was able to do so using:
Properties destProperties = new Properties();
destProperties.setProperty("bootstrap.servers", bootstrapServers);
FlinkKafkaProducer<MyAvroObject> kafkaProducer = new FlinkKafkaProducer<MyAvroObject>("MyKafkaTopic", new MySerializationSchema(), destProperties);
Many thanks in advance!
I think FileSinkFunctionByMillis can be used, but it would mean that your streaming program is not fault-tolerant: if your source, a machine, or the writing fails, your program will crash without being able to recover.
I suggest you look at using the RollingSink (https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#hadoop-filesystem). It can be used to create Flume-like pipelines that ingest data into HDFS (or other file systems). The rolling sink is a recoverable sink, meaning your program would be fault-tolerant, since the Kafka consumer is also fault-tolerant. You can also specify a custom Writer to write the data in any format you want, for example Avro.
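As a rough illustration only (based on the flink-connector-filesystem module in Flink 0.10; treat the class names and setters here as assumptions to verify against the linked docs), wiring the rolling sink into the stream from the question could look like this, writing the records as plain strings; for real Avro files you would supply a custom Writer instead of StringWriter:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.StringWriter;

// Sketch: a recoverable rolling sink writing under /hdfsroot/avro-out,
// bucketed by hour and rolled when a part file reaches ~64 MB.
RollingSink<String> sink = new RollingSink<String>("/hdfsroot/avro-out");
sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH"));
sink.setWriter(new StringWriter<String>());
sink.setBatchSize(1024 * 1024 * 64);

socketStockStream.map(new MapFunction<MyAvroObject, String>() {
    @Override
    public String map(MyAvroObject value) {
        return value.toString(); // placeholder conversion; a custom Writer would keep the Avro binary format
    }
}).addSink(sink);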
I am porting my Google App Engine app from the Blobstore to Google Cloud Storage.
I found that in GAE SDK 1.9.7 they deprecated all the .getServingUrl() methods that took a BlobKey and replaced them with one that takes a ServingUrlOptions object as configuration.
This makes sense and seems to work, but there doesn't seem to be any matching .deleteServingUrl() that takes a GcsFilename.
I found the following in the SDK release notes, but it doesn't clarify how you actually do this:
Version 1.7.0 - June 26, 2012
You can now use get_serving_url() and delete_serving_url() for Google Cloud Storage buckets.
There is nothing in the ImagesService javadoc that appears to do the job.
How do you delete a serving URL that was created with a GcsFilename?
Solution
After way too much digging through JavaDocs, I discovered:
BlobKey createGsBlobKey(java.lang.String filename)
Here is the complete solution I ended up with.
Imports:
import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;
import com.google.appengine.api.images.ImagesService;
import com.google.appengine.api.images.ImagesServiceFactory;
import com.googlecode.objectify.Work;
import com.vertigrated.gae.codex.service.datastore.entity.ImageMetadata;
import javax.annotation.Nonnull;
import java.util.UUID;
import static com.googlecode.objectify.ObjectifyService.ofy;
And the code:
private static final BlobstoreService BLOBSTORE_SERVICE;
private static final ImagesService IMAGES_SERVICE;

static
{
    BLOBSTORE_SERVICE = BlobstoreServiceFactory.getBlobstoreService();
    IMAGES_SERVICE = ImagesServiceFactory.getImagesService();
}

@Override
public boolean delete(@Nonnull final UUID uuid)
{
    return ofy().transact(new Work<Boolean>()
    {
        @Override
        public Boolean run()
        {
            final ImageMetadata im = ofy().load().type(ImageMetadata.class).id(uuid.toString()).now();
            final BlobKey bk = BLOBSTORE_SERVICE.createGsBlobKey(im.getFilename().toString());
            IMAGES_SERVICE.deleteServingUrl(bk);
            ofy().delete().entity(im);
            return ImageMetadataEntityService.this.delete(uuid);
        }
    });
}
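One detail worth noting: createGsBlobKey() expects the Cloud Storage object in the "/gs/bucket/object" form. So if you start from a GcsFilename rather than a pre-built path string like im.getFilename() above, the conversion could look like this sketch (the bucket and object names here are made up):
// Sketch: turn a com.google.appengine.tools.cloudstorage.GcsFilename into the
// "/gs/<bucket>/<object>" path that BlobstoreService.createGsBlobKey() expects,
// then delete the serving URL for it.
GcsFilename gcsFilename = new GcsFilename("my-bucket", "images/" + uuid + ".png");
String gsPath = "/gs/" + gcsFilename.getBucketName() + "/" + gcsFilename.getObjectName();
BlobKey blobKey = BLOBSTORE_SERVICE.createGsBlobKey(gsPath);
IMAGES_SERVICE.deleteServingUrl(blobKey);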