Background#
When buying breakfast, you may run into a situation where the Alipay and WeChat QR codes are pasted right next to each other, and a single scan recognizes both codes. The earlier way of handling this might have been: when the app determines the scanned content matches its own scheme, it redirects automatically; later, the common approach became popping up a QR code selection page when multiple QR codes are recognized, and redirecting only after the user selects a specific one.
Our project at work had never implemented this feature, and recently I had some spare time, so I put it together and added it to the project. Here I share and record the implementation process.
Process#
The entire process is:
- Recognize the QR codes in the image;
- If there is only one, redirect directly;
- If there are multiple QR codes, go to a QR code selection page;
- The selection page marks the position of each QR code;
- Tapping a marked QR code redirects to its corresponding link (see the dispatch sketch after this list).
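To make the flow concrete, here is a minimal sketch of the top-level dispatch. The selection-page class name WPSSelectScanImageViewController and the redirect helper handleScanResult: are assumptions for illustration; only the item class and the block/property names appear in the code later in this post.
// Sketch only: top-level handling after recognition.
// WPSSelectScanImageViewController and handleScanResult: are placeholder names.
- (void)handleRecognizedFeatures:(NSArray<CIQRCodeFeature *> *)features inImage:(UIImage *)image {
    if (features.count == 0) {
        return; // nothing recognized
    }
    if (features.count == 1) {
        // Only one QR code: redirect directly with its payload
        [self handleScanResult:features.firstObject.messageString];
        return;
    }
    // Multiple QR codes: show the selection page, which marks each code's position
    WPSSelectScanImageViewController *selectVC = [WPSSelectScanImageViewController new];
    selectVC.displayImage = image;
    selectVC.features = features;
    __weak typeof(self) weakSelf = self;
    selectVC.selectScanStrBlock = ^(NSString *scanQRStr) {
        // Redirect to the link of the QR code the user tapped
        [weakSelf handleScanResult:scanQRStr];
    };
    [self presentViewController:selectVC animated:YES completion:nil];
}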
QR Code Recognition#
The logic for QR code recognition is as follows:
// UIImage + Category
// Recognize QR codes in an image
- (NSArray<CIFeature *> *)imageQRFeatures {
    CIImage *ciImage = [[CIImage alloc] initWithCGImage:self.CGImage options:nil];
    CIContext *context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
    NSArray *features = [detector featuresInImage:ciImage];
    return features;
}
The `features` array returned by the method above has one element per QR code found in the image. Each element is a `CIQRCodeFeature` object, which carries the corresponding QR code's position and payload.
Check `features`: if `count > 1`, iterate over the array, mark the position of each QR code, and generate a new image. Note that the coordinates returned in `CIQRCodeFeature` cannot be used directly because the coordinate systems differ, so a conversion is necessary.
The code is as follows:
// Calling code
UIImage *targetImage = [UIImage imageNamed:@"Your Image"];
NSArray *features = [targetImage imageQRFeatures];
if (features && (features.count > 1)) {
    // More than one QR code was recognized
    for (CIQRCodeFeature *feature in features) {
        // Draw a border around each recognized QR code
        targetImage = [targetImage drawQRBorder:targetImage features:feature];
    }
}
// UIImage + Category
- (UIImage *)drawQRBorder:(UIImage *)targetImage features:(CIQRCodeFeature *)feature {
    CGSize size = targetImage.size;
    UIGraphicsBeginImageContext(size);
    [targetImage drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    // Draw the border. The recognized bounds and the image use different coordinate systems, so flip first
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the coordinate system
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -size.height);
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:feature.bounds];
    // Color of the marking box
    [[UIColor colorWithRed:255.0/255.0 green:59.0/255.0 blue:48.0/255.0 alpha:1.0] setStroke];
    path.lineWidth = 3.0;
    [path stroke];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}
After generating the image with the QR code positions marked, display it on a new page. The next question is how to determine which specific QR code was tapped. There are two implementation schemes:
- Scheme 1: Add a transparent button to the specified position based on the QR code's position, with a size equal to or larger than the QR code size, and then respond to the button event;
- Scheme 2: Based on the touch event, determine which QR code's frame the touch point is within, and respond to that event.
Implementation process:
Whichever scheme you choose, the implementation needs to handle coordinate-system conversion as well as scaling and offset: calculate the scale factor from the image's actual size and the size at which it is displayed, use that factor to compute the offset of the displayed position, and then apply scaling and offset after the coordinate-system conversion to obtain the final position.
Thus, the process for obtaining the actual on-screen position is:
- Obtain the transform for the coordinate-system conversion.
- Calculate the scale factor from the display width and the image's actual width, and build the scaling transform.
- From the scale factor and the image's display position, compute the offset; e.g., if the image is vertically centered, the offset is (screen height - image height * scale factor) / 2.0.
- Iterate over the `features` array obtained from recognizing the QR codes, apply the coordinate-system conversion, scaling, and offset to each `CIQRCodeFeature`'s bounds, and add buttons at the final calculated positions; these steps are wrapped into a helper sketch right after this list.
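The conversion steps above can be wrapped in one small helper. This is only a sketch, assuming the image is displayed aspect-fit at the view's width and vertically centered; the method name displayFrameForFeature:image:viewSize: is made up for illustration.
// Sketch: convert a CIQRCodeFeature's bounds (Core Image coordinates, origin at the
// image's bottom-left) into a UIKit frame, assuming aspect-fit display at the view's
// width with the image vertically centered.
- (CGRect)displayFrameForFeature:(CIQRCodeFeature *)feature
                           image:(UIImage *)image
                        viewSize:(CGSize)viewSize {
    // 1. Flip the vertical axis: Core Image -> UIKit coordinates
    CGAffineTransform flip = CGAffineTransformIdentity;
    flip = CGAffineTransformScale(flip, 1, -1);
    flip = CGAffineTransformTranslate(flip, 0, -image.size.height);
    CGRect frame = CGRectApplyAffineTransform(feature.bounds, flip);
    // 2. Scale from image size to on-screen size
    CGFloat scale = viewSize.width / image.size.width;
    frame = CGRectApplyAffineTransform(frame, CGAffineTransformMakeScale(scale, scale));
    // 3. Offset for the vertically centered placement
    frame.origin.y += (viewSize.height - image.size.height * scale) / 2.0;
    return frame;
}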
Implementation of Scheme 1:#
After obtaining the final position, scheme one adds a `UIButton` at that position, sets its `tag`, and finally determines which QR code was tapped from the button's action event.
The code is as follows:
static NSInteger kTagBeginValue = 1000;

- (void)addAlphaButtons {
    self.messageList = [NSMutableArray array];
    // Transform for the coordinate-system conversion
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -self.displayImage.size.height);
    // Scale factor: display width (screen width) / actual width of the image
    CGFloat scaleX = self.view.bounds.size.width / self.displayImage.size.width;
    CGFloat scaleY = scaleX;
    // Scaling transform
    CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scaleX, scaleY);
    // If the image is vertically centered, the offset is (screen height - image height * scale) / 2.0
    // zScreenHeight is a project macro for the screen height
    CGFloat offsetY = (zScreenHeight - self.displayImage.size.height * scaleY) / 2.0;
    for (CIQRCodeFeature *feature in self.features) {
        NSInteger index = [self.features indexOfObject:feature];
        // IsNilString is a project helper, presumably checking for a nil/empty string
        if (!IsNilString(feature.messageString) &&
            (index != NSNotFound)) {
            // Coordinate-system conversion
            CGRect frame = CGRectApplyAffineTransform(feature.bounds, transform);
            // Scaling
            frame = CGRectApplyAffineTransform(frame, scaleTransform);
            // Offset
            frame.origin.y += offsetY;
            UIButton *tempButton = [UIButton buttonWithType:UIButtonTypeCustom];
            tempButton.backgroundColor = [UIColor clearColor];
            tempButton.frame = frame;
            [self.view addSubview:tempButton];
            tempButton.tag = kTagBeginValue + index;
            [tempButton addTarget:self action:@selector(handleBtnAction:) forControlEvents:UIControlEventTouchUpInside];
            [self.messageList addObject:feature.messageString];
        }
    }
}
- (void)handleBtnAction:(UIButton *)sender {
    NSInteger index = sender.tag - kTagBeginValue;
    if (index < self.messageList.count) {
        NSString *scanQRStr = self.messageList[index];
        if (self.selectScanStrBlock) {
            self.selectScanStrBlock(scanQRStr);
            [self dismissViewControllerAnimated:NO completion:nil];
        }
    }
}
Implementation of Scheme 2:#
For scheme two, after obtaining the final position, store the position and QR code payload in an object; then, in `touchesBegan:withEvent:`, get the tapped point and determine whether it falls within a QR code's range, and if so, which one.
The code is as follows:
First, define an object that stores the QR code's payload and position, along with a method that determines whether a point falls within the QR code's range, allowing for a margin of error (how far outside the QR code a tap still counts).
// WPSSelectScanImageItem.h
@interface WPSSelectScanImageItem : NSObject

@property (nonatomic, strong) NSString *qrcodeStr;
@property (nonatomic, assign) CGRect qrcodeFrame;

// Determine whether a point falls within the QR code's range
- (BOOL)isPointInQrcodeFrame:(CGPoint)targetPoint;

@end

// WPSSelectScanImageItem.m
#import "WPSSelectScanImageItem.h"

@implementation WPSSelectScanImageItem

- (BOOL)isPointInQrcodeFrame:(CGPoint)targetPoint {
    BOOL result = NO;
    // Margin of error
    CGFloat offsetValue = 10.0;
    // Valid range for the QR code, expanded by the margin on each side
    CGFloat minX = self.qrcodeFrame.origin.x - offsetValue;
    CGFloat minY = self.qrcodeFrame.origin.y - offsetValue;
    CGFloat maxX = self.qrcodeFrame.origin.x + self.qrcodeFrame.size.width + offsetValue;
    CGFloat maxY = self.qrcodeFrame.origin.y + self.qrcodeFrame.size.height + offsetValue;
    // The point to check
    CGFloat targetX = targetPoint.x;
    CGFloat targetY = targetPoint.y;
    // Check whether the point is within the expanded range
    if ((targetX >= minX) &&
        (targetX <= maxX) &&
        (targetY >= minY) &&
        (targetY <= maxY)) {
        result = YES;
    }
    return result;
}

@end
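As a side note, the same hit test can be expressed with the Core Graphics helpers `CGRectInset` and `CGRectContainsPoint`; here is a behaviorally equivalent sketch (up to edge handling) with the same 10-point margin.
// Equivalent check: expand the frame by the margin, then test containment.
- (BOOL)isPointInQrcodeFrame:(CGPoint)targetPoint {
    CGFloat offsetValue = 10.0; // margin of error
    CGRect hitFrame = CGRectInset(self.qrcodeFrame, -offsetValue, -offsetValue);
    return CGRectContainsPoint(hitFrame, targetPoint);
}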
Then calculate the actual display position of the QR code and store it, as follows:
- (void)initData {
    self.qrcodeItemList = [NSMutableArray array];
    // Transform for the coordinate-system conversion
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -self.displayImage.size.height);
    // Scale factor: display width (screen width) / actual width of the image
    CGFloat scaleX = self.view.bounds.size.width / self.displayImage.size.width;
    CGFloat scaleY = scaleX;
    // Scaling transform
    CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scaleX, scaleY);
    // If the image is vertically centered, the offset is (screen height - image height * scale) / 2.0
    CGFloat offsetY = (zScreenHeight - self.displayImage.size.height * scaleY) / 2.0;
    for (CIQRCodeFeature *feature in self.features) {
        NSInteger index = [self.features indexOfObject:feature];
        if (!IsNilString(feature.messageString) &&
            (index != NSNotFound)) {
            // Coordinate-system conversion
            CGRect frame = CGRectApplyAffineTransform(feature.bounds, transform);
            // Scaling
            frame = CGRectApplyAffineTransform(frame, scaleTransform);
            // Offset
            frame.origin.y += offsetY;
            WPSSelectScanImageItem *item = [WPSSelectScanImageItem new];
            item.qrcodeFrame = frame;
            item.qrcodeStr = feature.messageString;
            [self.qrcodeItemList addObject:item];
        }
    }
}
Then, in the `touchesBegan:withEvent:` method, get the tapped point and determine whether it falls within a QR code's range, and if so which one, as follows:
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    // Get the touch object
    UITouch *touch = touches.anyObject;
    // Get the touch point
    CGPoint touchPoint = [touch locationInView:self.view];
    // Check whether the touch point falls within any QR code's range
    for (WPSSelectScanImageItem *item in self.qrcodeItemList) {
        BOOL isPointInFrame = [item isPointInQrcodeFrame:touchPoint];
        if (isPointInFrame) {
            if (self.selectScanStrBlock) {
                self.selectScanStrBlock(item.qrcodeStr);
                [self dismissViewControllerAnimated:NO completion:nil];
            }
            break;
        }
    }
}
The overall effect is demonstrated as follows:
The complete code is available on GitHub: https://github.com/mokong/MultipleQRHandle.git