I want to mask the credit card's numbers while the user is taking the picture, leaving only the last 4 digits visible. Is this possible?
Thanks,
Shir
Dave from card.io here.
#ShirA, is your desire that, while the card is visible in the camera view, before the scan has even completed, all but the last 4 digits should be obscured?
If you think about it, you'll see that this could never be guaranteed.
The user can hold their card in any orientation and at any distance relative to the camera. Only when the card is in roughly the correct position can card.io even determine where the edges of the card are. And then it will take a moment longer for card.io to locate the card numbers themselves. (In some cases, such as non-traditional cards, it will not be able to locate the numbers at all.)
I can imagine a complete redesign of the camera view in card.io that might satisfy you. For one example, the entire center region of the screen could be obscured at all times, so that only the periphery of the card is ever visible.
But we're not currently in a position to redesign card.io to such a degree.
If this is really important to you, the card.io codebase is open-source. You can experiment yourself, and we would be delighted to review a Pull Request with any suggested improvements to card.io.
I made an app that detects objects in the smartphone camera feed, and now I want to draw the bounding boxes.
I've seen more than one way to do it. So far I'm planning either to use the react-native-canvas library or to create a button styled as a bounding box and positioned at the corresponding coordinates, but I'm wondering what the least resource-intensive solution would be.
This is because object detection already takes up a lot of resources, and now I am going to add a function that draws bounding boxes several times per second, so I will surely have to lower the detection rate; ideally I'd lower it as little as possible. This is one of those situations where fractions of a second matter for performance.
I'm pretty new to React Native, so I need some help finding the optimal solution.
For example, rendering plain views without installing an external library might work faster, but I'm not sure if that makes sense.
Hopefully somebody can point me in the right direction.
Thanks.
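In case a concrete comparison helps: the "plain views, no external library" idea above can be as simple as absolutely positioned Views layered over the camera preview. Here is a minimal sketch in TypeScript/React Native; the detection shape ({x, y, width, height, label} in screen coordinates) is an assumption for illustration, not something prescribed by any library.

```tsx
import React from "react";
import { StyleSheet, Text, View } from "react-native";

// Assumed shape of a detection, already mapped to screen coordinates.
interface Detection {
  x: number;
  y: number;
  width: number;
  height: number;
  label: string;
}

// Plain Views with absolute positioning: no canvas, no extra library.
// pointerEvents="none" lets touches fall through to the camera view.
export function BoxOverlay({ detections }: { detections: Detection[] }) {
  return (
    <View style={StyleSheet.absoluteFill} pointerEvents="none">
      {detections.map((d, i) => (
        <View
          key={i}
          style={{
            position: "absolute",
            left: d.x,
            top: d.y,
            width: d.width,
            height: d.height,
            borderWidth: 2,
            borderColor: "lime",
          }}
        >
          <Text style={{ color: "lime", fontSize: 12 }}>{d.label}</Text>
        </View>
      ))}
    </View>
  );
}
```

For a handful of boxes per frame, plain Views like this are usually cheap; a canvas tends to pay off only when drawing many shapes at once.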
I am designing a chess-like game, but I believe this question can be extended to any game that involves battles.
So my question is: how should I properly label an object as a friend object or an enemy object in a battle game?
Take chess for example. If player W plays white pieces, then W's opponent, player B, plays black pieces.
For W, all white pieces are friend objects and black pieces are enemy objects, while player B has black friends and white enemies. So here my labeling problem can be solved simply by using colors: when you calculate the possible moves of a piece and decide whether you can capture another piece, you basically check whether the two pieces have distinct colors.
However, what if the two players have the same view? For example, a game could color all friends black and all enemies white, with each player starting with their pieces at the bottom of their own screen. In this sense, pieces can indeed be viewed as attributes of the player: the black/white coloring is based on whether a piece has an owner id that matches yours.
Here comes a problem:
Although I can decide the color by comparing the ids, what if I want to know whether a friend piece enters the enemy region?
The enemy region could be defined as the upper part of the board. Since the board is not part of a player (while pieces can be), the previous id-based solution does not apply.
One possible solution I can think of is to make the board an attribute of the player. However, with two players this doubles the data storage, because from an objective perspective there is really only one board. And any move or further change to a piece then requires operations on two boards, which is also time-consuming.
To avoid duplicating the board, I tried another strategy: a landscape view. What I mean is that the data does not take sides with either player. Players view the board from bottom to top, like a portrait view, while the data is viewed from left to right (or right to left), the way an impartial referee would see it. So the data becomes objective. But then the problem is how to present the objective data to each player in a subjective manner, and it is still not easy to know whether a piece has entered the enemy region.
So my question is: how should I design the data structure so that the data is not doubled, tripled, or copied multiple times, while at the same time it stays easy to know the labels of a public area? In my case, the public area is a chess board, where all pieces fight on this single board. I call it public because both players can move their pieces anywhere on the board, and third parties can also see what is going on. The labels of the pieces, however, are subjective attributes: personal, private thoughts.
I am not sure I can phrase my question well, so I did not search for a similar question; sorry for my laziness. If a duplicate question has already been asked, please point me to it.
---Update---
I think a clearer version of the question is this: say I have a 3 × 3 board. If I need to move my piece from (1, 2) to (2, 3), then from my opponent's perspective the move does not take place at the same location; it is the opposite, from (3, 2) to (2, 1). What is the best way to store this move? Do I have to pick one side (subjective) to store it, or is there a neutral/objective way to do this?
The drawback of the subjective way, for me, is that I need to recalculate the moves for the other player to fit his/her view. A neutral way might be a lifesaver.
A general answer, without a concrete database design: there is data, and there are multiple views. There is no need to copy data; instead, keep multiple references to the same data (which of course also takes some memory). E.g., a rook is referenced both by a square and by a player. Such references may be stored (a collection such as "enemies") or calculated on the fly (a collection such as "reachable"), depending on your runtime/storage design decisions. Multiple players, likewise, are just views. On each game move you have to update all stored (static) views, while dynamically calculated views need no update.
For symmetric games, there is also the possibility of not storing the opponent's view at all, but generating it by transforming the whole game into the other player's view on every move. Again, this is your decision: stored vs. dynamically created.
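To make the "one set of data, many views" idea concrete, here is a minimal sketch in TypeScript. The names (Piece, ownerId, inEnemyRegion) and the convention that player W's home half is the low rows are assumptions for illustration, not part of the answer above.

```typescript
// One objective board: a single source of truth, no per-player copies.
interface Piece {
  ownerId: "W" | "B"; // which player owns this piece
  x: number;          // column, 1-based, in one fixed board orientation
  y: number;          // row, 1-based, in one fixed board orientation
}

const SIZE = 8; // board is SIZE x SIZE

// "Friend" and "enemy" are not stored labels; they are computed views.
function isEnemy(piece: Piece, viewerId: "W" | "B"): boolean {
  return piece.ownerId !== viewerId;
}

// The "enemy region" is likewise derived, not stored. By the assumed
// convention, W's home half is rows 1..SIZE/2, so W's enemy region is
// the upper half, and vice versa for B.
function inEnemyRegion(piece: Piece): boolean {
  return piece.ownerId === "W" ? piece.y > SIZE / 2 : piece.y <= SIZE / 2;
}
```

Nothing here is duplicated per player: both "color" and "region" fall out of comparisons against a single stored board.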
If I need to move my piece from (1, 2) to (2, 3), then from my opponent's perspective the move does not take place at the same location; it is the opposite, from (3, 2) to (2, 1). What is the best way to store this move? Do I have to pick one side (subjective) to store it, or is there a neutral/objective way to do this?
Pick one orientation for the board. It doesn't matter which one you pick. This is a neutral/objective way -- neither choice offers any advantage for either player, and the choice doesn't significantly affect the code. Any imbalance you imagine, like that it's better to have (0,0) on your own side, is imaginary.
This is not a "subjective" choice; it is an "arbitrary" choice.
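A minimal sketch of that arbitrary-but-fixed convention, in TypeScript, using the 3 × 3 example from the question (the names Square, Move, and flip are illustrative assumptions): moves are always stored in one orientation, and the opponent's display is produced by a pure transform at render time.

```typescript
interface Square { x: number; y: number } // 1-based coordinates
interface Move { from: Square; to: Square }

const SIZE = 3; // the 3 x 3 board from the question

// Rotate the board 180 degrees: the same square as seen from the
// other player's side. Applied only when rendering, never when storing.
function flip(sq: Square): Square {
  return { x: SIZE + 1 - sq.x, y: SIZE + 1 - sq.y };
}

function flipMove(m: Move): Move {
  return { from: flip(m.from), to: flip(m.to) };
}

// The stored move, in the one fixed orientation:
const stored: Move = { from: { x: 1, y: 2 }, to: { x: 2, y: 3 } };

// Rendered for the opponent: from (3, 2) to (2, 1),
// matching the example in the question.
console.log(flipMove(stored));
```

The stored data never changes shape; only the presentation layer applies flip for the player facing the other way.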
I want to measure a horizontal plane surface to find out whether it fits the object I am going to place. For example, suppose I am going to place a cot 3D model (with a fixed size) in a room using iOS 11 ARKit.
First, I want to detect whether the room surface is sufficient for my 3D model by measuring the surface area (width, length, etc.).
Second, if the user tries to place it where there is insufficient space, I should not allow him to place the cot, and should show an error message.
I created a sample POC by following https://developer.apple.com/sample-code/wwdc/2017/PlacingObjects.zip, with which I am able to detect the horizontal plane and place the cot. But the issue is that whatever the surface, the user is able to place the cot, which should not be allowed.
I saw a couple of demos in which they say we can measure the size of a room or a horizontal plane (https://www.curbed.com/2017/6/29/15894556/ar-measure-app-augmented-reality-ruler-measuring-tape-ios).
I am using ARKit with SceneKit in order to achieve this, and I am new to AR and SceneKit. I need to know whether this is doable and, if so, how to achieve it.
You could estimate the size of a detected plane by inspecting its dimensions. But you shouldn't.
ARKit has plane estimation, not scene reconstruction. That is, it'll tell you there's a flat surface at (some point) and that said surface probably extends at least (some distance) from that point. It doesn't know exactly how big the surface is (it's even refining its estimate over time), and it doesn't tell you where there are interruptions in that continuous surface, much less the size and shape of such interruptions.
In fact, if you're looking at the floor and moving around, and you see one patch of floor, then another patch of floor on the other side of a solid wall from the first, ARKit will happily recognize that those two patches are coplanar and merge them into the same anchor. At the same time, neither detected patch may cover the entire extent of the floor around it.
If you attempt to restrict where the user can place virtual objects in AR based on plane estimates, you're likely to frustrate them with two kinds of error: you'll have areas where it looks to the user like they can place something but that don't allow it, and you'll have areas that look like they should be off-limits that do allow placing things.
Instead, design your experience to involve the user in deciding where the sensible places for content are. See this demo for example — ARKit detects the level of the floor (not its boundaries), then uses that to show UI indicating the size/shape of objects to be placed. It's up to the user to make sure there's enough room for the couch, etc.
As for the technical how-to on what you probably shouldn't do: The docs for ARPlaneAnchor.extent say that the x and z coordinates of that vector are the width and length of the estimated plane. And all units in ARKit are meters. (Which is width and which is length? It's a matter of perspective. And of the rotation encoded in the anchor's transform.)
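The fit check itself is then plain geometry. Here is a sketch in TypeScript for illustration only (in an actual app you would read the extent in Swift via ARPlaneAnchor.extent; the dimensions here are assumed to be in meters, as all ARKit units are):

```typescript
// Footprint of the model and estimated extent of the plane, in meters.
interface Footprint { width: number; length: number }

// Does the model fit within the estimated plane extent? Both
// orientations are tried, since which of extent.x / extent.z counts
// as "width" depends on the rotation encoded in the anchor's transform.
function fitsOnPlane(model: Footprint, extentX: number, extentZ: number): boolean {
  const fitsAsIs = model.width <= extentX && model.length <= extentZ;
  const fitsRotated = model.length <= extentX && model.width <= extentZ;
  return fitsAsIs || fitsRotated;
}

// Example: a 0.9 m x 2.0 m cot on a plane estimated at 2.5 m x 1.2 m.
console.log(fitsOnPlane({ width: 0.9, length: 2.0 }, 2.5, 1.2)); // true
```

Keep the caveat above in mind: a passing check only means the estimate is big enough, not that the real, uninterrupted surface is.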
I am envisioning a Google Glass knitting app to help my sister. Many knitters, including her, like to work from knitting charts, which are big grids with special symbols to indicate the type of stitch. You work your way down the chart, one row at a time, knitting the indicated stitches, and voila! you have a fancy sweater or whatever. She often knits while sitting on the bus, or even while walking, and having the chart on Glass would leave her hands free to do what's important.
Let me describe the ideal, but apparently unsupported, interface, and then you can tell me if there's any good substitute. Ideally, there would be a bundle containing 100 numbered rows of knitting symbols (or however long the chart is). The user sees the current row across the middle of the screen, with the rows above and below displayed more dimly. Swiping back and forward would move up and down the chart, vertically scrolling by one row and highlighting the current row. Because there are so many rows, the user needs a way to skip to a particular row, if they are picking up where they left off. I imagine them tapping to bring up a menu that allows them to speak the desired row number.
It appears that this is completely impossible at the moment. Vertical scrolling is not supported; instead, I would need to create a bundle of horizontally scrolling images of sets of three rows, with the one in the middle highlighted to be the "current" one. OK, that's an acceptable substitute. But then how does the user select a particular row? Do I need to give each card a menu allowing them to somehow request a particular row, which then gets sent over the network to the server, which then sends back a new version of the bundle with the desired row toward the beginning? That sounds wasteful, slow and fragile. Does the Glass UI provide any way to handle this kind of data? If not, is it possible that it will handle it in the future?
I can imagine plenty of applications (teleprompter, karaoke, etc.) that involve vertically scrolling through significant numbers of rows, so I'm sure I'm not the only requester of this, which makes me think that maybe, if it's not currently supported, it might be in the future. Thanks.
It sounds like you are interested in filing a feature request with the Glass team. You can do so here by clicking the New Issue button in the top left. In your request, it will be helpful to the Glass team to summarize the use case that you have described in this question.
I am new to WPF and C# as a whole. I have experience with programming languages like PHP, HTML, and JavaScript, so I was able to adapt quickly.
I have a project that prints PINs and serial numbers on cards. Say the paper is A4, and on it there are 4 rows and 3 columns of printed cards.
My problem is that I don't really know how to generate the dynamic document, or which approach to use. The content cannot simply be laid out on the paper beforehand; it has to be placed at a particular position on the paper, and the inch measurements must be strictly adhered to.
The link below illustrates what I am talking about (borders excluded). All I want is to generate the PINs and serials laid out this way on a specific paper size.
http://ecloudpack.com/grid.png
Don't get me wrong: with FlowDocuments I think I could produce something, but I am faced with the challenges of positioning precisely on the paper, making sure the pagination is correct, and making sure the margins that are generated match what was specified.
I have a Monday deadline and I have been trying.
Is there anyone who can help?
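Purely to illustrate the positioning arithmetic involved, here is a sketch in TypeScript; the margin is an assumed value, and in WPF you would multiply inches by 96 to get device-independent units before placing elements on a FixedPage.

```typescript
// A4 in inches (swap for landscape).
const PAGE_W = 8.27, PAGE_H = 11.69;
const COLS = 3, ROWS = 4;
const MARGIN = 0.5; // assumed page margin, in inches

// Each card occupies an equal share of the printable area.
const CARD_W = (PAGE_W - 2 * MARGIN) / COLS;
const CARD_H = (PAGE_H - 2 * MARGIN) / ROWS;

// Top-left corner of the card in column c, row r (0-based), in inches.
function cardOrigin(c: number, r: number): { x: number; y: number } {
  return { x: MARGIN + c * CARD_W, y: MARGIN + r * CARD_H };
}

// Pagination: with 12 cards per page, item i lands on page
// floor(i / 12) in cell i % 12.
function place(i: number): { page: number; x: number; y: number } {
  const perPage = COLS * ROWS;
  const cell = i % perPage;
  const { x, y } = cardOrigin(cell % COLS, Math.floor(cell / COLS));
  return { page: Math.floor(i / perPage), x, y }; // x, y in inches
}

console.log(place(13)); // second page, second cell
```

The same origins can drive a FixedDocument page by page, which sidesteps FlowDocument's automatic layout and keeps positions exact.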