In Swift, lazy properties let us defer initialising a class member until it is first accessed, rather than when the containing instance is created - useful for computationally expensive operations.
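For example (an illustrative snippet with made-up names - the property is only initialised on first access):
class ExpensiveThing {
    init() { print("ExpensiveThing initialised") }
}

class Holder {
    // `thing` is not created when a Holder is created
    lazy var thing = ExpensiveThing()
}

let holder = Holder()   // nothing printed yet
_ = holder.thing        // only now is "ExpensiveThing initialised" printed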
I have a class in Swift 4 that is responsible for initialising a strategy from a hard-coded (developer-provided, known at compile time) array of StrategyProtocol objects. It looks something like this:
class StrategyFactory {
    private var availableStrategies: [StrategyProtocol] = [
        OneClassThatImplementsStrategyProtocol(),
        AnotherThatImplementsStrategyProtocol() // etc
    ]

    public func createStrategy() -> StrategyProtocol {
        // Depending on some runtime-calculated operation
        // How do I do this nicely here?
    }
}
However, from my understanding, placing () at the end of each strategy initialises the objects(?), when I may only want to create one depending on certain runtime conditions.
Either way, is it possible to place lazy somewhere around the values in an Array class member to only instantiate the one I want when I ask for it? Or would I have to go about this with closures or some other alternative?
Current attempt
Is this doing what I think it is? Until I get the first element of the array and execute it, it won't actually instantiate the strategy?
private var availableStrategies: [() -> (StrategyProtocol)] = [
    { OneClassThatImplementsStrategyProtocol() }
]
Your "Current attempt" does what you think it does. You have an array
of closures, and the strategy is initialized only when the closure is
executed.
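For completeness, here is roughly what the factory looks like with the closure approach - a sketch that reuses the types from the question, with the selection logic reduced to just taking the first element:
class StrategyFactory {
    private var availableStrategies: [() -> StrategyProtocol] = [
        { OneClassThatImplementsStrategyProtocol() },
        { AnotherThatImplementsStrategyProtocol() }
    ]

    public func createStrategy() -> StrategyProtocol {
        // Nothing has been instantiated up to this point; calling the
        // closure is what actually creates the strategy.
        return availableStrategies[0]()
    }
}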
A possible alternative: Store an array of types instead of
instances or closures (as Zalman Stern also suggested).
In order to create instances on demand, an
init() requirement has to be added to the protocol (which must then
be satisfied by a required init() unless the class is final,
compare Why use required Initializers in Swift classes?).
A possible advantage is that you can query static properties
in order to find a suitable strategy.
Here is a small self-contained example, where createStrategy()
creates and returns the first "fantastic" strategy:
protocol StrategyProtocol {
    init()
    static var isFantastic: Bool { get }
}

class OneClassThatImplementsStrategyProtocol: StrategyProtocol {
    required init() { }
    static var isFantastic: Bool { return false }
}

final class AnotherThatImplementsStrategyProtocol: StrategyProtocol {
    init() { }
    static var isFantastic: Bool { return true }
}

class StrategyFactory {
    private var availableStrategies: [StrategyProtocol.Type] = [
        OneClassThatImplementsStrategyProtocol.self,
        AnotherThatImplementsStrategyProtocol.self // etc
    ]

    public func createStrategy() -> StrategyProtocol? {
        for strategy in availableStrategies {
            if strategy.isFantastic {
                return strategy.init()
            }
        }
        return nil
    }
}
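Used like this (illustrative):
let factory = StrategyFactory()
if let strategy = factory.createStrategy() {
    // Only AnotherThatImplementsStrategyProtocol was instantiated,
    // because it is the first type whose isFantastic returns true.
    print(type(of: strategy))   // AnotherThatImplementsStrategyProtocol
}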
AnyClass, Meta Type and .self may answer your question. (I am not an expert on Swift, but the use of metaclasses is likely what you want, and Swift, as I expected, appears to support them.) You can look through this Stack Overflow search.
EDIT: In case it wasn't clear, the idea is to have the array of strategies contain the metaclasses for the protocols rather than instantiations. This depends on whether you want a new strategy object for each instantiation of the class with the lazy property, or whether strategies are effectively global and cached instances are created once. If the latter, then the lazy array approach for holding them might work better.
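To sketch that last idea - assuming the protocol with the init() requirement and the concrete classes from the answer above - a lazy property over the metatype array creates and caches all the strategies once, on first use:
class CachedStrategyFactory {
    private let strategyTypes: [StrategyProtocol.Type] = [
        OneClassThatImplementsStrategyProtocol.self,
        AnotherThatImplementsStrategyProtocol.self
    ]

    // All strategies are instantiated once, on the first access of this property.
    private lazy var cachedStrategies: [StrategyProtocol] = self.strategyTypes.map { $0.init() }

    func createStrategy() -> StrategyProtocol? {
        return cachedStrategies.first { type(of: $0).isFantastic }
    }
}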
Related
How can I create an array which will hold objects belonging to a specific class?
class BaseObject {}
class Derived1: BaseObject {}
class Derived2: BaseObject {}
I need to create an array which will hold only objects derived from BaseObject.
Something like - var array : [BaseObject.Type] = []
Is there a way to specify this?
Also, I should be able to use it something like this
if let derived1 = object as? [Derived1] {
}
else if let derived2 = object as? [Derived2] {
}
You can obviously define your array as an array of BaseObject:
var objects: [BaseObject] = [] // or `var objects = [BaseObject]()`
But it's going to let you create a heterogeneous collection (of BaseObject, Derived1, Derived2, or any other subclass). That's a core OO design concept (the Liskov substitution principle): any subclass of BaseObject should (and will) be permitted.
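For example (a small sketch reusing the classes from the question):
var objects: [BaseObject] = []
objects.append(Derived1())
objects.append(Derived2())   // both are accepted in a [BaseObject]

for object in objects {
    if let derived1 = object as? Derived1 {
        print("got a Derived1: \(derived1)")
    } else if let derived2 = object as? Derived2 {
        print("got a Derived2: \(derived2)")
    }
}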
If all you want is to say that you can only have an array of one of the subtypes, you can obviously just define your array as such, e.g.:
var objects: [Derived1] = []
That will obviously allow only Derived1 objects (and any subclasses of Derived1).
90% of the time, the above is sufficient. But in some cases, you might need some collection with methods that require some inherited base behavior, but for which you don't want to allow heterogeneous collections. In this case, I might consider a more protocol-oriented pattern:
Bottom line, should we be subclassing, or should we be using a protocol-oriented approach? I.e. is BaseObject actually something you'll instantiate for its own purposes, or is it there merely to define some common behavior of the subclasses. If the latter, a protocol might be a better pattern, e.g.:
protocol Fooable {
    func foo()
}

// if you want, provide some default implementation for `foo` in a
// protocol extension
extension Fooable {
    func foo() {
        // does something unique to objects that conform to this protocol
    }
}

struct Object1: Fooable {}
struct Object2: Fooable {}
struct Object3: Fooable {}
This yields the sort of behavior that you may have been using in your more OO approach, but using protocols. Specifically, you write one foo method that all of the types that conform to this protocol, e.g., Object1, Object2, etc., can use without having to implement foo themselves (unless, of course, you want to because they need special behavior for some reason).
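For instance, none of the structs above declare foo(), yet the call still resolves to the extension's default implementation (illustrative):
let object = Object1()
object.foo()   // uses the default implementation from the Fooable extension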
Because this eliminates the subclassing, it opens the door to generics and protocols that provide some generalized behavior while enforcing the homogeneous nature of the members. For example:
struct FooCollection<T: Fooable> {
    private var array = [T]()

    mutating func append(_ object: T) {
        array.append(object)
    }

    // and let's assume you need some method for your collection that
    // performs some `Fooable` task for each instance
    func fooAll() {
        array.forEach { $0.foo() }
    }
}
This is a generic type holding a homogeneous collection of objects that conform to your protocol. For example, when you go to use it, you'd declare a particular Fooable type to use:
var foo = FooCollection<Object1>()
foo.append(Object1()) // permitted
foo.append(Object2()) // not permitted
foo.fooAll()
Now, I only went down this road because in comments elsewhere, you were inquiring about generics. I'd personally only go down this road if (a) the collection really needed to be homogeneous; and (b) the collection also wanted to implement some shared logic common to the protocol. Otherwise, I'd probably just stick with a simple [Derived1] (or [Object1]). The above can be powerful when needed, but is overkill for simpler situations.
For more discussion about protocol-oriented programming, the homogeneous vs heterogeneous behavior, and common stumbling blocks when you're coming from a traditional OO mindset, I'd refer you to the WWDC 2015 video, Protocol-Oriented Programming in Swift, or its 2016 companion video that builds upon it.
Finally, if you have any additional questions, I'd suggest you edit your question providing details on a practical problem that you're trying to solve with this pattern. Discussions in the abstract are often not fruitful. But if you tell us what the actual problem you're trying to solve with the pattern in your question, it will be a far more constructive conversation.
I want to get rid of the clone() method.
For the class below, Sonar (a static code analysis tool) was complaining that I should not directly expose the object's internal array, as one can change the array after the method call, which in turn changes the object's state. It suggested doing a clone() of that array before returning so that the object's state is not changed.
Below is my class...
class DevicePlatformAggregator implements IPlatformListings {

    private DevicePlatform[] platforms = null;

    public DevicePlatform[] getAllPlatforms() throws DevicePlatformNotFoundException {
        if (null != platforms) {
            return platforms.clone();
        }
        List<DevicePlatform> platformlist = new ArrayList<DevicePlatform>();
        ..... // code that populates platformlist
        platforms = platformlist.toArray(new DevicePlatform[platformlist.size()]);
        return platforms;
    }
}
However, I don't think it's good to clone, as it's unnecessary to duplicate the data.
There is nothing similar to Collections.unmodifiableList() for arrays.
I cannot change the return type of the method getAllPlatforms() to some collection, as it is an interface method.
I am not a Java guru, but I am pretty confident that you are out of luck here. There is no way to make a Java array immutable, apart from creating an array of 0 elements.
Making it final won't help, because only the reference pointing to it would be immutable.
As you already said, the way to obtain an unmodifiable list would be to use Collections, as in the following example:
List<Integer> contentcannotbemodified= Collections.unmodifiableList(Arrays.asList(13,1,8,6));
Hope it helps.
Please see this JSON returned when I asked the question: "What are the basic methods of Optional?" This is not the best-match answer that is being returned in the Retrieve and Rank tooling (pasted below this JSON snippet). Can you please help me understand why this is happening?
{
"context": {
"conversation_id": "f87c08f1-2122-4d44-b0bc-d05cd458162d",
"system": {
"dialog_stack": "[root]",
"dialog_turn_counter": "1.0",
"dialog_request_counter": "1.0"
}
},
"inquiryText": "what are the basic methods of Optional",
"responseText": "Going to get answers from Devoxx corpus, ",
"resources": [
{
"id": "\"50a305ba-f8fd-4470-afde-884df5170e29\"",
"score": "1.5568095",
"title": "\"no-title\"",
"body": "\"Voxxed JUnit 5 – The Basics Nicolai Parlog 5 months ago Categories: Methodology Tags: java , JUnit , JUnit 5 19 SHARES Facebook Twitter Reddit Google Mail Linkedin Digg Stumbleupon Buffer Last time, we set up JUnit 5 to be able to write tests. So let’s do it! Overview This post is part of a series about JUnit 5: Setup Basics Architecture Conditions Injection … Most of what you will read here and more can be found in the emerging JUnit 5 user guide . Note that it is based on an alpha version and hence subject to change.Indeed, we are encouraged to open issues or pull requests so that JUnit 5 can improve further. Please make use of this opportunity! It is our chance to help JUnit help us, so if something you see here could be improved, make sure to take it upeam .This post will get updated when it becomes necessary. The code samples I show here can be found on GitHub . Philosophy The new architecture, which we will discuss another time, is aimed at extensibility. It is possible that someday very alien (at least to us run-of-the-mill Java devs) testing techniques will be possible with JUnit 5. But for now the basics are very similar to the current version 4. JUnit 5’s surface undergoes a deliberately incremental improvement and developers should feel right at home. At least I do and I think you will, too: Basic Lifecycle And Features class Lifecycle { #BeforeAll static void initializeExternalResources() { System.out.println(\\\"Initializing external resources...\\\"); } #BeforeEach void initializeMockObjects() { System.out.println(\\\"Initializing mock objects...\\\"); } #Test void someTest() { System.out.println(\\\"Running some test...\\\"); assertTrue(true); } #Test void otherTest() { assumeTrue(true); System.out.println(\\\"Running another test...\\\"); assertNotEquals(1, 42, \\\"Why wouldn't these be the same?\\\"); } #Test #Disabled void disabledTest() { System.exit(1); } #AfterEach void tearDown() { System.out.println(\\\"Tearing down...\\\"); } #AfterAll static void freeExternalResources() { System.out.println(\\\"Freeing external resources...\\\"); } } See? No big surprises. The Basics Of JUnit 5 Visibility The most obvious change is that test classes and methods do not have to be public anymore. Package visibility suffices but private does not. I think this is a sensible choice and in line with how we intuit the different visibility modifiers. Great! I’d say, less letters to type but you haven’t been doing that manually anyways, right? Still less boilerplate to ignore while scrolling through a test class. Test Lifecycle #Test The most basic JUnit annotation is #Test, which marks methods that are to be run as tests. It is virtually unchanged, although it no longer takes optional arguments. Expected exceptions can now be verified via assertions but as far as I know there is not yet a replacement for timeouts . JUnit 5 creates a new test instance for each test method (same as JUnit 4). Before And After You might want to run code to set up and tear down your tests. There are four method annotations to help you do that: #BeforeAll Executed once; runs before the tests and methods marked with #BeforeEach. #BeforeEach Executed before each test. #AfterEach Executed after each test. #AfterAll Executed once; runs after all tests and methods marked with #AfterEach. Because a new instance is created for each test, there is no obvious instance on which to call the #BeforeAll/ #AfterAll methods, so they have to be static. 
The order in which different methods annotated with the same annotation are executed is undefined. As far as I can tell the same is true for inherited methods. Whether it should be possible to define an order is currently being discussed . Except in name, these annotations work exactly like in JUnit 4. While not uncommon , I am not convinced of the names, though. See this issue for details. Disabling Tests It’s Friday afternoon and you just want to go home? No problem, just slap#Disabled on the test (optionally giving a reason) and run. A Disabled Test #Test #Disabled(\\\"Y U No Pass?!\\\") void failingTest() { assertTrue(false); } Test Class Lifecycle Compared to the prototype it is interesting to note that the test class lifecycle didn’t make it into the alpha version. It would run all tests on the same instance of the test class, thus allowing the tests to interact with each other by mutating state. As I already wrote while discussing the prototype: I think this is a typical case of a feature that is harmful in 99% of the cases but indispensable in the other 1%. Considering the very real risk of horrible inter-test-dependencies I’d say it was a good thing that it was taken out in its original form. But the JUnit team is discussing ways to bring it back in with a different name and added semantics. This would make its use very deliberate. What do you think? Assertions If #Test, #Before..., and #After... are a test suite’s skeleton, assertions are its heart. After the instance under test was prepared and the functionality to test was executed on it, assertions make sure that the desired properties hold. If they don’t, they fail the running test. Classic Classic assertions either check a property of a single instance (e.g. that it is not null) or do some kind of comparison (e.g. that two instances are equal). In both cases they optionally take a message as a last parameter, which is shown when the assertion fails. If constructing the message is expensive, it can be specified as a lambda expression, so construction is delayed until the message is actually required. Classic Assertions #Test void assertWithBoolean() { assertTrue(true); assertTrue(this::truism); assertFalse(false, () -> \\\"Really \\\" + \\\"expensive \\\" + \\\"message\\\" + \\\".\\\"); } boolean truism() { return true; } #Test void assertWithComparison() { List expected = asList(\\\"element\\\"); List actual = new LinkedList<>(expected); assertEquals(expected, actual); assertEquals(expected, actual, \\\"Should be equal.\\\"); assertEquals(expected, actual, () -> \\\"Should \\\" + \\\"be \\\" + \\\"equal.\\\"); assertNotSame(expected, actual, \\\"Obviously not the same instance.\\\"); } As you can see JUnit 5 doesn’t change much here. The names are the same as before and comparative assertions still take a pair of an expected and an actual value (in that order). That the expected-actual order is so critical in understanding the test’s failure message and intention, but can be mixed up so easily is a big blind spot. There’s nothing much to do, though, except to create a new assertion framework. Considering big players like Hamcrest (ugh!) or AssertJ (yeah!), this would not have been a sensible way to invest the limited time. Hence the goal was to keep the assertions focused and effort-free. New is that failure message come last. I like it because it keeps the eye on the ball, i.e. the property being asserted. As a nod to Java 8, Boolean assertions now accept suppliers , which is a nice detail. 
Extended Aside from the classical assertions that check specific properties, there are a couple of others. The first is not even a real assertion, it just fails the test with a failure message. 'fail' #Test void failTheTest() { fail(\\\"epicly\\\"); } Then we have assertAll, which takes a variable number of assertions and tests them all before reporting which failed (if any). #Test void assertAllProperties() { Address address = new Address(\\\"New City\\\", \\\"Some Street\\\", \\\"No\\\"); assertAll(\\\"address\\\", () -> assertEquals(\\\"Neustadt\\\", address.city), () -> assertEquals(\\\"Irgendeinestraße\\\", address.street), () -> assertEquals(\\\"Nr\\\", address.number) ); } Failure Message for ‘AssertALL’ org.opentest4j.MultipleFailuresError: address (3 failures) expected: but was: expected: but was: expected: but was: This is great to check a number of related properties and get values for all of them as opposed to the common behavior where the test reports the first one that failed and you never know the other values. Finally we have assertThrows and expectThrows. Both fail the test if the given method does not throw the specified exception. The latter also returns the exceptions so it can be used for further verifications, e.g. asserting that the message contains certain information. #Test void assertExceptions() { assertThrows(Exception.class, this::throwing); Exception exception = expectThrows(Exception.class, this::throwing); assertEquals(\\\"Because I can!\\\", exception.getMessage()); } Assumptions Assumptions allow to only run tests if certain conditions are as expected. This can be used to reduce the run time and verbosity of test suites, especially in the failure case. #Test void exitIfFalseIsTrue() { assumeTrue(false); System.exit(1); } #Test void exitIfTrueIsFalse() { assumeFalse(this::truism); System.exit(1); } private boolean truism() { return true; } #Test void exitIfNullEqualsString() { assumingThat( \\\"null\\\".equals(null), () -> System.exit(1) ); } Assumptions can either be used to abort tests whose preconditions are not met or to execute (parts of) a test only if a condition holds. The main difference is that aborted tests are reported as disabled, whereas a test that was empty because a condition did not hold is plain green. Nesting Tests JUnit 5 makes it near effortless to nest test classes. Simply annotate inner classes with #Nested and all test methods in there will be executed as well: package org.codefx.demo.junit5;// NOT_PUBLISHED import org.junit.gen5.api.BeforeEach; import org.junit.gen5.api.Nested; import org.junit.gen5.api.Test; import static org.junit.gen5.api.Assertions.assertEquals; import static org.junit.gen5.api.Assertions.assertTrue; class Nest { int count = Integer.MIN_VALUE; #BeforeEach void setCountToZero() { count = 0; } #Test void countIsZero() { assertEquals(0, count); } #Nested class CountGreaterZero { #BeforeEach void increaseCount() { count++; } #Test void countIsGreaterZero() { assertTrue(count > 0); } #Nested class CountMuchGreaterZero { #BeforeEach void increaseCount() { count += Integer.MAX_VALUE / 2; } #Test void countIsLarge() { assertTrue(count > Integer.MAX_VALUE / 2); } } } } As you can see, #BeforeEach (and #AfterEach) work here as well. Although currently not documented the initializations are executed outside-in. This allows to incrementally build a context for the inner tests. For nested tests to have access to the outer test class’ fields, the nested class must not be static. 
Unfortunately this forbids the use of static methods so #BeforeAll and#AfterAll can not be used in that scenario. ( Or can they? ) Maybe you’re asking yourself what this is good for. I use nested test classes to inherit interface tests , others to keep their test classes small and focused . The latter is also demonstrated by the more elaborate example commonly given by the JUnit team , which tests a stack: class TestingAStack { Stack stack; boolean isRun = false; #Test void isInstantiatedWithNew() { new Stack(); } #Nested class WhenNew { #BeforeEach void init() { stack = new Stack(); } // some tests on 'stack', which is empty #Nested class AfterPushing { String anElement = \\\"an element\\\"; #BeforeEach void init() { stack.push(anElement); } // some tests on 'stack', which has one element... } } } In this example the state is successively changed and a number of tests are executed for each scenario. Naming Tests JUnit 5 comes with an annotation #DisplayName, which gives developers the possibility to give more easily readable names to their test classes and methods. With it, the stack example which looks as follows: #DisplayName(\\\"A stack\\\") class TestingAStack { #Test #DisplayName(\\\"is instantiated with new Stack()\\\") void isInstantiatedWithNew() { /*…*/ } #Nested #DisplayName(\\\"when new\\\") class WhenNew { #Test #DisplayName(\\\"is empty\\\") void isEmpty() { /*…*/ } #Test #DisplayName(\\\"throws EmptyStackException when popped\\\") void throwsExceptionWhenPopped() { /*…*/ } #Test #DisplayName(\\\"throws EmptyStackException when peeked\\\") void throwsExceptionWhenPeeked() { /*…*/ } #Nested #DisplayName(\\\"after pushing an element\\\") class AfterPushing { #Test #DisplayName(\\\"it is no longer empty\\\") void isEmpty() { /*…*/ } #Test #DisplayName(\\\"returns the element when popped and is empty\\\") void returnElementWhenPopped() { /*…*/ } #Test #DisplayName( \\\"returns the element when peeked but remains not empty\\\") void returnElementWhenPeeked(){ /*…*/ } } } } This creates nicely readable output and should bring joy to the heart of BDD ‘ers! Reflection That’s it, you made it! We rushed through the basics of how to use JUnit 5 and now you know all you need to write plain tests: How to annotate the lifecycle methods (with #[Before|After][All|Each]) and the test methods themselves ( #Test), how to nest ( #Nested) and name ( #DisplayName) tests and how assertions and assumptions work (much like before). But wait, there’s more! We didn’t yet talk about conditional execution of tests methods, the very cool parameter injection, the extension mechanism, or the project’s architecture. And we won’t right now because we will take a short break from JUnit 5 and come back to it in about a month. Stay tuned! Window size: x Viewport size: x\"",
"docName": "\"JUnit 5 – The Basics - Voxxed.htm\""
},
{
"id": "\"0054b4e9-6b55-420e-84bc-8f31c79a949f\"",
"score": "1.2038735",
"title": "\"By Stefan Bulzan\"",
"body": "\"With the advent of lambdas in Java we now have a new tool to better design our code. Of course, the first step is using streams, method references and other neat features introduced in Java 8. Going forward I think the next step is to revisit the well established Design Patterns and see them through the functional programming lenses. For this purpose I’ll take the Decorator Pattern and implement it using lambdas. We’ll take an easy and delicious example of the Decorator Pattern: adding toppings to pizza. Here is the standard implementation as suggested by GoF: First we have the interface that defines our component: public interface Pizza { String bakePizza(); } We have a concrete component: public class BasicPizza implements Pizza { #Override public String bakePizza() { return \\\"Basic Pizza\\\"; } } We decide that we have to decorate our component in different ways. We go with Decorator Pattern. This is the abstract decorator: public abstract class PizzaDecorator implements Pizza { private final Pizza pizza; protected PizzaDecorator(Pizza pizza) { this.pizza = pizza; } #Override public String bakePizza() { return pizza.bakePizza(); } } We provide some concrete decorators for the component: public class ChickenTikkaPizza extends PizzaDecorator { protected ChickenTikkaPizza(Pizza pizza) { super(pizza); } #Override public String bakePizza() { return super.bakePizza() + \\\" with chicken topping\\\"; } } public class ProsciuttoPizza extends PizzaDecorator { protected ProsciuttoPizza(Pizza pizza) { super(pizza); } #Override public String bakePizza() { return super.bakePizza() + \\\" with prosciutto\\\"; } } And this is the way to use the new structure: Pizza pizza = new ChickenTikkaPizza(new BasicPizza()); String finishedPizza = pizza.bakePizza(); //Basic Pizza with chicken topping pizza = new ChickenTikkaPizza(new ProsciuttoPizza(new BasicPizza())); finishedPizza = pizza.bakePizza(); //Basic Pizza with prosciutto with chicken topping We can see that this can get very messy, and it did get very messy if we think about how we handle buffered readers in Java: new DataInputStream(new BufferedInputStream(new FileInputStream(new File(\\\"myfile.txt\\\")))) Of course, you can split that in multiple lines, but that won’t solve the messiness, it will just spread it. Now lets see how we can do the same thing using lambdas. We start with the same basic component objects: public interface Pizza { String bakePizza(); } public class BasicPizza implements Pizza { #Override public String bakePizza() { return \\\"Basic Pizza\\\"; } } But now instead of declaring an abstract class that will provide the template for decorations, we will create the decorator that asks the user for functions that will decorate the component. public class PizzaDecorator { private final Function toppings; private PizzaDecorator(Function... desiredToppings) { this.toppings = Stream.of(desiredToppings) .reduce(Function.identity(), Function::andThen); } public static String bakePizza(Pizza pizza, Function... desiredToppings) { return new PizzaDecorator(desiredToppings).bakePizza(pizza); } private String bakePizza(Pizza pizza) { return this.toppings.apply(pizza).bakePizza(); } } There is this line that constructs the chain of decorations to be applied: Stream.of(desiredToppings).reduce(identity(), Function::andThen); This line of code will take your decorations (which are of Function type) and chain them using andThen. 
This is the same as… (currentToppings, nextTopping) -> currentToppings.andThen(nextTopping) And it makes sure that the functions are called subsequently in the order you provided. Also, Function.identity() is translated to elem -> elem lambda expression. OK, now where will we define our decorations? Well, you can add them as static methods in PizzaDecorator or even in the interface: public interface Pizza { String bakePizza(); static Pizza withChickenTikka(Pizza pizza) { return new Pizza() { #Override public String bakePizza() { return pizza.bakePizza() + \\\" with chicken\\\"; } }; } static Pizza withProsciutto(Pizza pizza) { return new Pizza() { #Override public String bakePizza() { return pizza.bakePizza() + \\\" with prosciutto\\\"; } }; } } And now, this is how this pattern gets to be used: String finishedPizza = PizzaDecorator.bakePizza(new BasicPizza(),Pizza::withChickenTikka, Pizza::withProsciutto); //And if you static import PizzaDecorator.bakePizza: String finishedPizza = bakePizza(new BasicPizza(),Pizza::withChickenTikka, Pizza::withProsciutto); As you can see, the code got more clear and more concise, and we didn’t use inheritance to build our decorators. This is just one of the many design patterns that can be improved using lambdas. There are more features that can be used to improve the rest of them like using partial application (currying) to implement Adapter Pattern. I hope I got you thinking about adopting a more functional programming approach to your development style.\"",
"docName": "\"Decorator Design Pattern Using Lambdas - Voxxed.htm\""
}
]
}
When you submit a query to R&R with a trained ranker id, it will take the responses from the retrieve side of the service, and use the ranker to sort them into order based on what the ranker has "learned" about relevance.
The number of rows you fetch from the retrieve service in the first place is crucial for this.
To take the extreme case as an example, if you fetch only one row, it doesn't matter what training you've given the ranker. It has one thing to sort into order. So it will return that one thing.
If you fetch a small number of rows, for example 3, you will retrieve those three rows, and then the ranker will sort them. You will always get the same three rows - the difference the ranker makes will be what order they come in.
If you fetch a very large number of rows, for example 100, the ranker has 100 results to sort into order, so the top answer from such a query may well be different to what the top answer would be if you'd only fetched the top few.
When comparing the top result from two different apps querying the R&R service, it's therefore essential to take the rows parameter into account.
The web tooling you included a screenshot of uses a rows parameter of 30. It's retrieving 30 rows, and then using the ranker you've selected to sort them into order and display the top results.
My guess is that your application is either setting rows to something different, or not setting it at all and using the default value of 10. If you set the rows parameter in your application to 30, matching what the web tool is doing, I would expect the results to then be consistent.
There is more background about the rows parameter here: https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.shtml
Both Java and JavaScript allow for different ways of executing static code: Java allows you to have static code in the body of a class, while JS allows you to execute static code outside class definitions. Examples:
Java:
public class MyClass {
    private static Map<String,String> someMap = new HashMap<String,String>();

    static {
        someMap.put("key1","value");
        someMap.put("key2","value");
        SomeOtherClass.someOtherStaticMethod();
        System.out.println(someMap);
    }
}
JS (basically any JS code outside a class):
var myint = 5;
callSomeMethod();

$(document).ready(function () {
    $("#hiddenelement").hide();
});
However, it seems like Dart supports neither of these ways. Declaring global variables and methods is supported, but calling methods and executing code like in JS is not; this can only be done in a main() method. Also, static code inside a class is not allowed either.
I know Dart has other ways to statically fill a Map like my first example, but there is another case that I can think of for which this is required.
Let's consider the following CarRegistry implementation that allows you to map strings of the car model to an instance of the corresponding class, e.g. when you get the car models from JSON data:
class CarRegistry {
  static Map<String, Function> _factoryMethods = new HashMap<String, Function>();

  static void registerFactory(String key, Car factory()) {
    _factoryMethods[key] = factory;
  }

  static Car createInstance(String key) {
    Function factory = _factoryMethods[key];
    if (factory != null) {
      return factory();
    }
    throw new Exception("Key not found: $key");
  }
}

class TeslaModelS extends Car {
}

class TeslaModelX extends Car {
}
In order to be able to call CarRegistry.createInstance("teslamodelx");, the class must first be registered. In Java this could be done by adding the following line to each Car class: static { CarRegistry.registerFactory("teslamodelx", () -> new TeslaModelX()); }. What you don't want is to hard-code all cars into the registry, because it will lose its function as a registry, and it increases coupling. You want to be able to add a new car by only adding one new file. In JS you could call the CarRegistry.registerFactory("teslamodelx", () => new TeslaModelX()); line outside the class construct.
How could a similar thing be done in Dart?
Even if you were willing to edit multiple files to add a new car, it would not be possible if you are writing a library without a main() method. The only option then is to fill the map on the first call of the Registry.createInstance() method, but then it's no longer a registry, just a class containing a hard-coded list of cars.
EDIT: A small addition to the last statement I made here: filling this kind of registry in the createInstance() method is only an option if the registry resides in my own library. If, for example, I want to register my own classes with a registry provided by a different library that I imported, that's no longer an option.
Why all the fuss about static?
You can create a getter that checks whether the initialization has already been done (_factoryMethods != null); if not, it does the initialization and then returns the map.
As far as I understand it, this is all about at what time this code should be executed.
The approach I showed above is lazy initialization.
I think this is usually the preferred way.
If you want to do initialization when the library is loaded, I don't know another way than calling an init() method of the library from main() and adding the initialization code to this library's init() method.
Here is a discussion about this topic: executing code at library initialization time.
I encountered the same issue when trying to drive a similarly themed library.
My initial attempt explored using dart:mirrors to iterate over classes in a library and determine if they were tagged by an annotation like this (using your own code as part of the example):
@car('teslamodelx')
class TeslaModelX extends Car {
}
If so, they got automatically populated into the registry. Performance wasn't great, though, and I wasn't sure how it was going to scale.
I ended up taking a more cumbersome approach:
// Inside of CarRegistry.dart
class CarRegister {
  static bool _registeredAll = false;

  static Car create() {
    if (!_registeredAll) { _registerAll(); }
    /* ... */
  }
}
// Inside of the same library, tesla_model_x.dart
class TeslaModelX extends Car {}

// Inside of the same library, global namespace:
// This method registers all "default" vehicles in the vehicle registry.
_registerAll() {
  register('teslamodelx', () => new TeslaModelX());
}

// Inside of the same library, global namespace:
register(carName, carFxn) { /* ... */ }
Outside of the library, consumers had to call register(); somewhere to register their vehicle.
It is unnecessary duplication, and unfortunately separates the registration from the class in a way that makes it hard to track, but it's either cumbersome code or a performance hit by using dart:mirrors.
YMMV, but as the number of register-able items grows, I'm starting to look towards the dart:mirrors approach again.
I would like to create a new array with a given type from a class object in GWT.
What I mean is I would like to emulate the functionality of
java.lang.reflect.Array.newInstance(Class<?> componentClass, int size)
The reason I need this to occur is that I have a library which occasionally needs to do the following:
Class<?> cls = array.getClass();
Class<?> cmp = cls.getComponentType();
This works if I pass it an array class normally, but I can't dynamically create a new array from some arbitrary component type.
I am well aware of GWT's lack of reflection; I understand this. However, this seems feasible even given GWT's limited reflection. The reason I believe this is that in the implementation, there exists an inaccessible static method for creating a class object for an array.
Similarly, I understand the array methods to just be type-safe wrappers around JavaScript arrays, and so should be easily hackable, even if JSNI is required.
In reality, the more important thing would be getting the class object, I can work around not being able to make new arrays.
If you are cool with creating a seed array of the correct type, you can use JSNI along with some knowledge of super-super-source to create arrays WITHOUT copying through ArrayList (I avoid java.util overhead like the plague):
public static native <T> T[] newArray(T[] seed, int length)
/*-{
    return @com.google.gwt.lang.Array::createFrom([Ljava/lang/Object;I)(seed, length);
}-*/;
Where seed is a zero-length array of the correct type you want, and length is the length you want (although, in production mode, arrays don't really have upper bounds, it makes the [].length field work correctly).
The com.google.gwt.lang package is a set of core utilities used in the compiler for base emulation, and can be found in gwt-dev!com/google/gwt/dev/jjs/intrinsic/com/google/gwt/lang.
You can only use these classes through JSNI calls, and only in production GWT code (guard the call with GWT.isProdMode()). In general, if you only access the com.google.gwt.lang classes in super-source code, you are guaranteed to never leak references to classes that only exist in compiled JavaScript.
if (GWT.isProdMode()) {
    return newArray(seed, length);
} else {
    return (T[]) Array.newInstance(seed.getClass().getComponentType(), length);
}
Note, you'll probably need to super-source the java.lang.reflect.Array class to avoid a GWT compiler error, which suggests you'll want to put your native helper method there. However, I can't help you more than this, as it would overstep the bounds of my work contract.
The way that I did a similar thing was to pass an empty, 0-length array to the constructor of the object that needs to create the array.
public class Foo extends Bar<Baz> {
    public Foo()
    {
        super(new Baz[0]);
    }
    ...
}
Bar:
public abstract class Bar<T>
{
    private T[] emptyArray;

    public Bar(T[] emptyArray)
    {
        this.emptyArray = emptyArray;
    }
    ...
}
In this case the Bar class can't directly create new T[10], but we can do this:
ArrayList<T> al = new ArrayList<T>();
// add the items you want etc
T[] theArray = al.toArray(emptyArray);
And you get your array in a typesafe way (passing an array of the wrong type to super(new Baz[0]); would cause a compiler error).
I had to do something similar; I found it was possible using the Guava library's ObjectArrays class. Instead of the class object, it requires a reference to an existing array.
T[] newArray = ObjectArrays.newArray(oldArray, oldArray.length);
While implementing an array concatenation method, I also ran into the issue of the missing Array.newInstance method.
It's still not implemented, but if you have an existing array you can use
Arrays.copyOf(T[] original, int newLength)
instead.