mypy TypeVar: include bound class as well as subclasses

class Super:
    @classmethod
    def instantiate(cls) -> What goes here?:
        return cls()

class Sub(Super):
    pass

class Sub2(Super):
    pass
When writing type hints, what's the canonical way to say a method should return an instance of the superclass or any of its subclasses? The closest I could find is TypeVar("Super", bound="Super"), but this still raises an error for the Super class in mypy.

Here you can use typing.Type to type-hint cls, like this:
from typing import TypeVar, Type

C = TypeVar("C")

class Super:
    @classmethod
    def instantiate(cls: Type[C]) -> C:
        return cls()

class Sub(Super):
    pass

class Sub2(Super):
    pass

reveal_type(Sub.instantiate())   # note: Revealed type is 'tmp.Sub*'
reveal_type(Sub2.instantiate())  # note: Revealed type is 'tmp.Sub2*'
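As an aside (not from the original answer): on Python 3.11+, typing.Self expresses the same thing without a hand-rolled TypeVar. A minimal sketch, assuming a reasonably recent mypy:

from typing import Self  # Python 3.11+; earlier: from typing_extensions import Self

class Super:
    @classmethod
    def instantiate(cls) -> Self:
        return cls()

class Sub(Super):
    pass

reveal_type(Sub.instantiate())  # note: Revealed type is "Sub"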


Unable to call method from extended class

I have a parent class that I extended into a child class. The goal is to reuse existing, working functionality, but an error occurs when the parent class's method is called from the child.
from sr import BaseClient

class Client(BaseClient):
    def __init__(self, credentials=None):
        ...
    def _request(self, path: str, params):
        pass

class Address(Client):
    def get_address(self, **kwargs):
        return self._request('path', kwargs)
Running the test:
import pytest
from .... import Addresses

def test_get_address():
    res = Addresses.get_address({'query': 'some',.....other here})
The error points to the parent method called from the child class (Addresses):
FAILED tests/test_api.py::test_get_addresses - AttributeError: 'dict' object has no attribute '_request'
This is because you're trying to call get_address as a static method, and the dict you're giving to the function ({'query': 'some',.....other here}) is passed as the self argument. This is why you get an error that a dict has no attribute _request, because it doesn't.
Perhaps you meant to instantiate the class first, like Address().get_address()?
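A minimal sketch of what the test might look like (the constructor arguments and query value here are hypothetical, since the snippet elides them):

def test_get_address():
    client = Addresses(credentials=None)    # instantiate the class first
    res = client.get_address(query='some')  # keyword arguments land in **kwargs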

How to debug a serialization exception in Flink?

I've encountered several serialization exceptions, and I did some searching on the internet and in Flink's docs; there are some well-known solutions like transient, extends Serializable, etc. Each time the origin of the exception is clear, but in my case I am unable to find exactly what is not serializable.
Q: How should I debug this kind of exception?
A.scala:
class executor(val sink: SinkFunction[List[String]]) {
  def exe(): Unit = {
    xxx.....addSink(sink)
  }
}
B.scala:
class Main extends App {
  def createSink(): SinkFunction[List[String]] = new StringSink()

  object StringSink {
    // static
    val stringList: List[String] = List()
  }

  // create a testing sink
  class StringSink extends SinkFunction[List[String]] {
    override def invoke(strs: List[String]): Unit = {
      // add strs into the variable "stringList" of the companion object StringSink
    }
  }

  new executor(createSink()).exe()
  // then do something with the strings
}
The exception is:
The implementation of the SinkFunction is not serializable. The
object probably contains or references non serializable fields.
Two suspicious points that I found:
1. The instance of StringSink is passed into another file.
2. Inside the class StringSink, the static variable stringList of its companion object is used.
I faced similar problems. It used to take a long time to find out which member/object was not serializable; the exception logs are not really helpful.
What helped me is the following JVM option, which enables more detail in the exception trace.
Enable this option:
-Dsun.io.serialization.extendedDebugInfo=true
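Where you set it depends on how you run Flink. As one example (an assumption, not part of the original answer), a standalone Flink cluster picks up extra JVM options from env.java.opts in flink-conf.yaml:

# flink-conf.yaml
env.java.opts: -Dsun.io.serialization.extendedDebugInfo=true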
My first guess would be that you don't have a no-argument constructor in StringSink.
Rules for POJO types (clipped from here):
Flink recognizes a data type as a POJO type (and allows “by-name” field referencing) if the following conditions are fulfilled:
The class is public and standalone (no non-static inner class)
The class has a public no-argument constructor
All non-static, non-transient fields in the class (and all superclasses) are either public (and non-final) or have public getter and setter methods that follow the Java beans naming conventions.
Just add a no-argument constructor and try again:
class StringSink() extends SinkFunction[List[String]] {
  override def invoke(strs: List[String]): Unit = {
    // add strs into the variable "stringList" of the companion object StringSink
  }
}
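For reference, one way to structure a test sink that avoids both suspicious points is to make StringSink a top-level class collecting into its companion object. A sketch, assuming Flink's streaming API; the buffer name is made up:

import org.apache.flink.streaming.api.functions.sink.SinkFunction

// top-level class: no reference to an enclosing instance, so it serializes cleanly
class StringSink extends SinkFunction[List[String]] {
  override def invoke(strs: List[String]): Unit =
    StringSink.buffer.synchronized { StringSink.buffer ++= strs }
}

object StringSink {
  // static per-JVM buffer; fine for local tests
  val buffer: scala.collection.mutable.ListBuffer[String] =
    scala.collection.mutable.ListBuffer.empty[String]
}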

Scala Array Generics vs Vector Generics

So I have been working on a refactor in my project to convert Vectors in my code to Arrays. The reason is that my application needs to be very performant, and while-loop iteration over Arrays is significantly faster than for comprehensions and iteration over Vectors (see this blog post for details).
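For context, the while-style iteration being referred to looks roughly like this (a minimal sketch, not taken from the blog post):

// Sum an Array with a manual while loop; avoids the closure and
// iterator overhead of a for comprehension.
def sum(xs: Array[Int]): Int = {
  var total = 0
  var i = 0
  while (i < xs.length) {
    total += xs(i)
    i += 1
  }
  total
}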
However, I have run into an issue I can't seem to easily find an answer for. I have tweaked my code to hide implementation details and boiled it down to just the code needed to highlight the issue.
The following class structure, when using Vector, compiles totally fine:
sealed abstract class BaseClass {
  def element: Int
}

sealed abstract class TypeA extends BaseClass {
  def element = 2
  def children: Vector[BaseClass]
}

case class TypeB(element: Int = 2) extends BaseClass
case class TypeAA(children: Vector[TypeA]) extends TypeA
case class TypeAB(children: Vector[TypeB]) extends TypeA
Now, when switching from using Vector to using Array, it no longer compiles:
sealed abstract class BaseClass {
  def element: Int
}

sealed abstract class TypeA extends BaseClass {
  def element = 2
  def children: Array[BaseClass]
}

case class TypeB(element: Int = 2) extends BaseClass
case class TypeAA(children: Array[TypeA]) extends TypeA
case class TypeAB(children: Array[TypeB]) extends TypeA
I get the error "overriding method children in class TypeA of type => Array[BaseClass]; value children has incompatible type" for both the TypeAA and TypeAB classes.
I have a feeling I need to do an implicit conversion somewhere, but I am relatively new to Scala (only a couple months) and am not sure exactly how to use it.
Sorry if this has been asked elsewhere; I had trouble finding the correct search terms for the issue I am having.
I think you need to use _ <: BaseClass instead of using the generic type itself:
sealed abstract class TypeA extends BaseClass {
  def element = 2
  def children: Array[_ <: BaseClass]
}
This happens because the generic type parameter in Array is invariant, while in Vector it is covariant:
final class Array[T]
final class Vector[+A]
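Putting it together, the full hierarchy from the question should compile with just that one change (a sketch; untested against your real code):

sealed abstract class BaseClass {
  def element: Int
}

sealed abstract class TypeA extends BaseClass {
  def element = 2
  // the wildcard accepts any Array whose elements are a subtype of BaseClass
  def children: Array[_ <: BaseClass]
}

case class TypeB(element: Int = 2) extends BaseClass
case class TypeAA(children: Array[TypeA]) extends TypeA
case class TypeAB(children: Array[TypeB]) extends TypeA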
Hope that helps you.
This comes from the fact that Vector[+A] is covariant in its type parameter A, while Array[A] is not; it is invariant in A.
You can work around that fact by parameterizing TypeA with a bounded type parameter:
sealed abstract class TypeA[A <: BaseClass] extends BaseClass {
  def element = 2
  def children: Array[A]
}
case class TypeAA(children: Array[TypeA[_]]) extends TypeA[TypeA[_]]
case class TypeAB(children: Array[TypeB]) extends TypeA[TypeB]

How do I have every test in a class automatically tagged with a specific tag?

I am using the FlatSpec trait to create my tests, and I would like to create a base class that would automatically tag any tests in that class with a particular tag.
For example, any tests in classes that inherit from the IntegrationTest class would automatically be appropriately tagged. So instead of:
class ExampleSpec extends FlatSpec {
  "The Scala language" must "add correctly" taggedAs(IntegrationTest) in {
    val sum = 1 + 1
    assert(sum === 2)
  }
}
I would like to do this instead and still have the test tagged as an IntegrationTest:
class ExampleSpec extends IntegrationSpec {
  "The Scala language" must "add correctly" in {
    val sum = 1 + 1
    assert(sum === 2)
  }
}
Thanks!
If you're willing to use a direct annotation on the test class, rather than a parent class, you can use the example at https://github.com/kciesielski/tags-demo. Adapted somewhat for your example, you need to declare a Java class:
package tags;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@org.scalatest.TagAnnotation
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface MyAnnotation {
}
Then you use it to annotate the Scala test class:
@tags.MyAnnotation
class ExampleSpec extends FlatSpec {
  "The Scala language" must "add correctly" in {
    val sum = 1 + 1
    assert(sum === 2)
  }
}
You then have to use the actual string tags.MyAnnotation to specify the tag you want run (or ignored).
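For example, with sbt (assuming a standard test setup), -n runs only the tagged tests and -l excludes them:

sbt "testOnly *ExampleSpec -- -n tags.MyAnnotation"
sbt "testOnly *ExampleSpec -- -l tags.MyAnnotation"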
I tried to annotate a parent class instead, but I can't get it to work. I could imagine it being a significant problem for you or not, depending on what else you're trying to do.
Actually, the online doc for the org.scalatest.Tag class does a fair job of describing all this, although I say that after getting it to work by following the above project on GitHub.
Since ScalaTest 2.2.0 tags can be inherited (http://www.scalatest.org/release_notes/2.2.0).
Add @Inherited to your annotation definition.
package tags;

import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@Inherited
@org.scalatest.TagAnnotation
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface RequiresIntegrationStuff {
}
Annotate your base spec.
@tags.RequiresIntegrationStuff
class IntegrationSpec extends FlatSpec {}
Just use your base spec as a base class.
class ExampleSpec extends IntegrationSpec {
  "The Scala language" must "add correctly" in {
    val sum = 1 + 1
    assert(sum === 2)
  }
}
After that, ExampleSpec will be tagged as tags.RequiresIntegrationStuff.
You will find working project here: https://github.com/wojda/tags-demo (based on https://github.com/kciesielski/tags-demo from Spiro Michaylov's answer)

Multi-class API + Endpoints Proto Datastore

When separating the API classes into multiple files, the API explorer shows the same request definition for all resources.
So based on the structure shown below (my apologies if it's too long), in the API explorer, both my_api.api_a.test and my_api.api_b.test show the same attribute, attr_b, which is the last in the api_server list definition. If I change it and put ApiA last, then both methods show attr_a.
Any idea what I am doing wrong?
# model/model_a.py
class A(EndpointsModel):
    attr_a = ndb.StringProperty()

# model/model_b.py
class B(EndpointsModel):
    attr_b = ndb.StringProperty()

# api/__init__.py
my_api = endpoints.api(name='my_api', version='v1')

# api/api_a.py
@my_api.api_class(resource_name='api_a')
class ApiA(remote.Service):
    @A.method(name='test', ...)
    ...

# api/api_b.py
@my_api.api_class(resource_name='api_b')
class ApiB(remote.Service):
    @B.method(name='test', ...)
    ...

# services.py
from api import my_api
application = endpoints.api_server([ApiA, ApiB])
I also tried to define the api_server as shown below, but that didn't work at all.
application = endpoints.api_server([my_api])
I've noticed similar issues (which might be a bug in the endpoints-proto-datastore library) when the actual method names (not the name in the decorator) are the same in different API classes.
Doesn't work:
class ApiA(remote.Service):
    @A.method(...)
    def test(self, model):
        ...

class ApiB(remote.Service):
    @B.method(...)
    def test(self, model):
        ...

Works:
class ApiA(remote.Service):
    @A.method(...)
    def test_a(self, model):
        ...

class ApiB(remote.Service):
    @B.method(...)
    def test_b(self, model):
        ...
You skipped those lines in your sample, but the behaviour you state matches what I encountered in this scenario.
