How To Create an App Like Instagram With a Web Service Backend – Part 2/2



Learn how to make a cool photo sharing app with a web service backend!
This is a blog post by iOS Tutorial Team member Marin Todorov, a software developer with 12+ years of experience, an independent iOS developer and the creator of Touch Code Magazine.
Ready to go on building your photo-sharing iPhone app?
Last time, in the first part of the tutorial, you created the basics of the web service, and added the ability to log in with a username/password and upload files.
In this second and final part of the tutorial, you get to do the cool stuff – taking photos, applying effects and uploading files to the server.
Grab some tasty breakfast like mine, and let’s go!

Getting Started: the Photo Screen

All right, it’s time to fire up the iPhone camera, capture some action and submit the results to the server via the API!
Open the tutorial project in Xcode and take a look at Storyboard.storyboard. What you have (already prepared) on the Photo Screen is a UIImageView where you’ll show the preview of the photo taken, a UITextField where the user can enter a title for the photo, and an action button that displays the screen menu. That’s everything you need for your photo app!
Switch to PhotoScreen.m and find btnActionTapped:. It’s empty, so add the code to show a menu of options when the action button is tapped:
[fldTitle resignFirstResponder];
 
//show the app menu
[[[UIActionSheet alloc] initWithTitle:nil
                             delegate:self
                    cancelButtonTitle:@"Close"
               destructiveButtonTitle:nil
                    otherButtonTitles:@"Take photo", @"Effects!", @"Post Photo", @"Logout", nil] 
 showInView:self.view];
First you make sure there’s no on-screen keyboard present. Since there’s only one text field, just calling resignFirstResponder on it should be enough.
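If this screen ever grows more input fields, a hedged alternative is to ask the whole view hierarchy to dismiss the keyboard instead of resigning each field individually:
//dismisses the keyboard no matter which subview is currently first responder
[self.view endEditing:YES];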
Then you show an action sheet with all the possible actions the user can perform:
Photo screen menu
  1. Take Photo – Invoke the standard camera dialog of iOS.
  2. Effects! – You’ll borrow some code from my colleague Jacob Gundersen‘s tutorial on applying effects to images in order to apply a Sepia effect to the user’s photo.
  3. Post Photo – Send the photo to the server using an API call.
  4. Logout – End the user session with the server.
You need to implement a UIActionSheet delegate method to handle taps on the different buttons in the sheet. But before you do that, you need to set up a few methods to handle each of the above actions. Above the @implementation directive, add the private definitions for those methods as follows:
@interface PhotoScreen(private)
-(void)takePhoto;
-(void)effects;
-(void)uploadPhoto;
-(void)logout;
@end
Now add this code to the end of the file to implement the UIActionSheet delegate method:
-(void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex {
    switch (buttonIndex) {
        case 0:
            [self takePhoto]; break;
        case 1:
            [self effects];break;
        case 2:
            [self uploadPhoto]; break;
        case 3:
            [self logout]; break;
    }
}
Build and run the project, log in and tap on the action button at the right of the tab bar. The menu should look something like this:
Start by implementing the first item: Take Photo.

Be a Photo-Snapping Beast

For those of you who’ve never interacted with the iPhone’s camera programmatically, it’s actually very easy. There’s an Apple standard view controller, which you only need to instantiate, set up and then present modally. Whenever the user takes a photo or cancels the process, callback methods on your class are called to handle the action.
Add the takePhoto method to the end of PhotoScreen.m:
-(void)takePhoto {
    UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init];
#if TARGET_IPHONE_SIMULATOR
    imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
#else
    imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
#endif
    imagePickerController.editing = YES;
    imagePickerController.delegate = (id)self;
 
    [self presentModalViewController:imagePickerController animated:YES];
}
UIImagePickerController is the view controller that allows the user to use the camera. As with any normal class, you make a new instance.
Next you set up the sourceType property – you can instruct the dialog whether the user should actually use the camera or can choose from the Photos library on the device. In the code above I snuck in some code to detect whether the app is running on the iPhone simulator and if so, the dialog just accesses the Photo library. (Because, you know… there’s no camera on the simulator.)
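If you'd rather not rely on a compile-time macro, UIImagePickerController can also tell you at runtime whether a camera is present. Here's a small sketch of that variant, which does the same job as the #if above:
//runtime check instead of the TARGET_IPHONE_SIMULATOR macro:
//fall back to the photo library whenever no camera is available
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
    imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
} else {
    imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
}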
Setting “editing” to YES allows the user to do simple editing operations on the photo before accepting it.
Finally, you present the camera dialog as a modal view and you leave the user in Apple’s hands. Next time you hear from the user, it’ll be in the methods handling response from the camera dialog.
You will be doing some scaling down and cropping of the images the users take with the camera, so you need to import a few handy UIImage categories. Scroll to the top of the file and below the other imports, add this one:
#import "UIImage+Resize.h"
Now implement two UIImagePickerControllerDelegate methods. Add the following to the end of the file:
#pragma mark - Image picker delegate methods
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
 UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Resize the image from the camera
 UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(photo.frame.size.width, photo.frame.size.height) interpolationQuality:kCGInterpolationHigh];
    // Crop the image to a square (yikes, fancy!)
    UIImage *croppedImage = [scaledImage croppedImage:CGRectMake((scaledImage.size.width -photo.frame.size.width)/2, (scaledImage.size.height -photo.frame.size.height)/2, photo.frame.size.width, photo.frame.size.height)];
    // Show the photo on the screen
    photo.image = croppedImage;
    [picker dismissModalViewControllerAnimated:NO];
}
 
-(void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [picker dismissModalViewControllerAnimated:NO];
}
Have a look at imagePickerController:didFinishPickingMediaWithInfo: first:
  1. All information about the image is passed via the info dictionary. So you have to first get the image from the dictionary.
  2. Then, you call resizedImageWithContentMode:bounds:interpolationQuality: to get a scaled-down image of the user’s photo. (This method also fixes wrong camera image orientation, as often happens.)
  3. Next you crop the image (which is a rectangle) to a square format by calling croppedImage: on the scaled image. Square photos are all the rage these days and you want only the best for your app :]
  4. You show the scaled and cropped image in the image view.
  5. Finally, you hide the camera dialog.
If the user cancels the process of taking a photo, imagePickerControllerDidCancel: will be invoked. All you actually need to do is dismiss the camera dialog.
That’s it! Fire up the app (either on device or simulator) and try taking some photos.
Note: You’ll only be able to use the camera on a device. If you’re testing the app on the simulator, you need one or more photos in your photo library on the simulator in order to do anything. Plus, if you’ve got your web server on your local machine, you won’t be able to access the web server via the http://localhost URL you specified in API.m. Instead, you’ll have to change the URL to indicate your machine’s IP address.
You can add new photos to the simulator by simply dragging and dropping an image onto the simulator. This will show the image in mobile Safari. Then, simply tap on the image and hold until you get an Action Sheet allowing you to save the image to the simulator’s photo library :]
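About the localhost URL mentioned in the note above: assuming the host is kept in a constant in API.m (the urlForImageWithId: method you'll add later in this tutorial uses a kAPIHost constant), the change could look something like the line below, with 192.168.1.10 standing in for your Mac's actual local IP address:
//in API.m - a sketch only; the exact form of the constant may differ in your project
#define kAPIHost @"http://192.168.1.10"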
First photo taken with iReporter

Send My Breakfast Back In Time

This tutorial will take you through implementing only one effect, but you are certainly encouraged to read more on the topic and implement more effects on your own.
The code to follow takes the photo loaded in the image view and applies a sepia effect to it. The code is taken from the Beginning Core Image tutorial, so if you are interested you can read more about it here.
Add the following method below takePhoto in PhotoScreen.m:
-(void)effects {
    //apply sepia filter - taken from the Beginning Core Image tutorial
    CIImage *beginImage = [CIImage imageWithData: UIImagePNGRepresentation(photo.image)];
    CIContext *context = [CIContext contextWithOptions:nil];
 
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone" 
                                  keysAndValues: kCIInputImageKey, beginImage, 
                        @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
    CIImage *outputImage = [filter outputImage];
 
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    photo.image = [UIImage imageWithCGImage:cgimg];
 
    CGImageRelease(cgimg);
}
Build and run the project again to see some cool stuff happening! Take a photo, tap the action button and choose Effects! Pretty cool, isn’t it? Thank you, Jacob!
Sepia!
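By the way, if you'd like to offer more than sepia, Core Image can list the filters it ships with. Here's a tiny sketch you could drop into effects temporarily while experimenting; it simply logs the names of the built-in color-effect filters:
//log the Core Image filters available in the color effects category -
//handy for picking additional effects to offer alongside sepia
NSArray *colorFilters = [CIFilter filterNamesInCategory:kCICategoryColorEffect];
NSLog(@"Color effect filters: %@", colorFilters);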

Your Place in the Cloud

To get this part of the application running, you have to step back for a second and finish the API class. Remember how I said you would be handling file uploads through the API? Well, you haven't implemented that functionality yet. So open API.m and find commandWithParams:onCompletion:.
It’s easy to spot the place where the code is supposed to go – there’s a handy comment left in the source code. However, you’re going to make a few amendments to the method body first. Here you go!
At the very beginning of the method body, add these few lines:
NSData* uploadFile = nil;
if ([params objectForKey:@"file"]) {
    uploadFile = (NSData*)[params objectForKey:@"file"];
    [params removeObjectForKey:@"file"];
}
This piece of code checks whether the API command gets a “file” param. If yes, it takes the file parameter out of the params dictionary and stores it separately. This is due to the fact that while all the other parameters are going to be sent as normal POST request variables, the photo contents will be sent separately as a multipart attachment to the request.
Now look at the block where you currently have only the “//TODO: attach file if needed” comment. Replace the comment with the following code:
if (uploadFile) {
   [formData appendPartWithFileData:uploadFile 
                               name:@"file" 
                           fileName:@"photo.jpg" 
                           mimeType:@"image/jpeg"];
}
The code’s pretty simple: you just add the binary contents of the file, a name for the request variable, the file name of the attachment (you’ll always pass photo.jpg for this), and a mime type to the request you’ll be sending to the server.
That’s about everything you need to add to handle file uploads! Easy-peasy.
Go back to PhotoScreen.m and add a new method to the class (the one called upon tapping Post Photo, of course):
-(void)uploadPhoto {
    //upload the image and the title to the web service
    [[API sharedInstance] commandWithParams:[NSMutableDictionary dictionaryWithObjectsAndKeys: 
                                             @"upload",@"command",
                                             UIImageJPEGRepresentation(photo.image,0.7),@"file",
                                             fldTitle.text, @"title",
                                             nil] 
     onCompletion:^(NSDictionary *json) {
 
  //completion
 
     }];
}
Now that the API supports file uploads, you just pass the parameters to it: the API command is “upload”; as the “file” parameter you pass the JPEG representation of the photo; and the title of the photo is taken from the text field in the user interface.
Now you reach another level of complexity: you authorize the user before they open the Photo screen, but hey! There’s no guarantee that the user is still authorized when they get to the point of uploading a photo.
Maybe the app stayed in the background for half a day, and then the user opened it up again and decided to upload a photo. Or maybe something else. But you don’t know.
So the “upload” call may now fail for more reasons than just network communication errors – it may also fail because the user session expired. You’ll have to handle that in a reasonable way to provide a good user experience.
First, add the following import to the top of the file:
#import "UIAlertView+error.h"
Next, inside the completion block in uploadPhoto, add the code to handle the server response:
 //completion
 if (![json objectForKey:@"error"]) {
 
     //success
     [[[UIAlertView alloc]initWithTitle:@"Success!" 
                                message:@"Your photo is uploaded" 
                               delegate:nil 
                      cancelButtonTitle:@"Yay!" 
                      otherButtonTitles: nil] show];
 
 } else {
     //error, check for expired session and if so - authorize the user
     NSString* errorMsg = [json objectForKey:@"error"];
     [UIAlertView error:errorMsg];
 
     if ([@"Authorization required" compare:errorMsg]==NSOrderedSame) {
         [self performSegueWithIdentifier:@"ShowLogin" sender:nil];
     }
 }
Let’s see: if the result doesn’t have a key called “error”, you assume the call was successful. You show the user a nice alert letting them know the operation was successful.
In the else branch, you store the error message in errorMsg and show it again using an alert. Then you compare the error message to the string “Authorization required” (it’s what the server will return when the user session doesn’t exist) and if that’s the case, you invoke the segue to show the Login screen.
What will happen in that case? The photo the user has taken will remain loaded in the UIImageView, the title will remain in the text field… and if the user authorizes herself successfully with the API, the Login screen will disappear and the user will have another chance to try to upload the photo to the server. Pretty cool!!!
The photo has been successfully uploaded
You’re almost done with this screen’s functionality.

Get Me Outta This Joint

Last (but not least) the user should always be able to log out. There are two steps to logging the user out:
  1. Destroy the “user” property in the API class.
  2. Destroy the user session on the server side.
Start with the Objective-C code. Add the following to PhotoScreen.m:
-(void)logout {
//logout the user from the server, and also upon success destroy the local authorization
[[API sharedInstance] commandWithParams:[NSMutableDictionary dictionaryWithObjectsAndKeys: 
                                         @"logout",@"command",
                                         nil]
                           onCompletion:^(NSDictionary *json) {
 
                               //logged out from server
                               [API sharedInstance].user = nil;
                               [self performSegueWithIdentifier:@"ShowLogin" sender:nil];
                           }];
}
You send the “logout” command to the server, and upon success you destroy the user data on the iPhone side. Since the user can’t do anything anymore on this screen (when not logged in), you give him the opportunity to login immediately as someone else by invoking the login screen segue.
There’s also a bit of work to be done on the server side. Open index.php from the web project, and add one more case to the “switch” statement:
case "logout":
 logout();break;
Now switch to api.php and at the end add the logout() function that you call from index.php:
function logout() {
 $_SESSION = array();
 session_destroy();
}
Pretty easy! All the per-user data kept on the server side is stored in the $_SESSION array. You erase the data by saving an empty array to $_SESSION, and the data is gone! Poof! You also call session_destroy() to make 101% sure the user session is no more. That’s all you need to do.
Congratulations! You’ve made it pretty far through this kind-of-heavy Objective-C/PHP ping-pong. But hey, glory awaits you! There’s just a little bit more to do and the app will be fully functional!
Fire up the app and play around – you deserve it! Take photos, apply effects, upload images to the server, and even log out and log back in. Cool!
However, there’s no way to see the photos you’ve saved to the server. That’s no fun!
Have no fear! Next, you are going to use that first screen of the app to show the photo stream. (You kind of had to implement the upload functionality first, or there’d be no photos to display in the stream!)

Streaming All Day, All Night

The plan for the stream functionality is to show the last 50 photos uploaded by all users.
Remember that you were so smart as to also generate thumbnails for the uploaded photos? That comes in handy right about now – on the stream screen. You’ll be loading only the thumbnails of the photos and showing a list on the Stream Screen.
But enough with the plans; get to doing! Open index.php (one last time) and add the final case to the switch statement:
case "stream":
 stream((int)$_POST['IdPhoto']);break;
There’s something strange here, right? Why does the stream command of the API take an “IdPhoto” parameter?
You’re going to use the same call for two different purposes. If there are no parameters, the API will return the last 50 photos as planned. If IdPhoto is provided, you’ll return the data for that single photo (i.e. when the user wants to see the full-sized photo of a thumbnail).
Switch to api.php and add the stream() function:
function stream($IdPhoto=0) {
 if ($IdPhoto==0) {
  $result = query("SELECT IdPhoto, title, l.IdUser, username FROM photos p JOIN login l ON (l.IdUser = p.IdUser) ORDER BY IdPhoto DESC LIMIT 50");
 } else {
  $result = query("SELECT IdPhoto, title, l.IdUser, username FROM photos p JOIN login l ON (l.IdUser = p.IdUser) WHERE p.IdPhoto='%d' LIMIT 1", $IdPhoto);
 }
 
 if (!$result['error']) {
  print json_encode($result);
 } else {
  errorJson('Photo stream is broken');
 }
}
As you can see, you’re again going for the simplest solution. You check if the parameter IdPhoto is equal to zero. If yes, you just query the last 50 photos. Otherwise, you try to fetch the requested photo.
You check if there was no error (i.e. big success) and send the result from the database to the iPhone app. Luckily, showing the photos inside the app is not so difficult!
Your goal is to create a listing of photo thumbnails and show them in a table-like layout. When the user taps one of them, you will invoke a segue to open the full-size version of the thumbnail.
You will develop a new custom thumbnail view that will show the photo thumb, the username, and will also automatically calculate its position and react to touches! The final layout of the screen will look like this:
The photo stream
From Xcode’s menu, choose File/New/File…, and select the Objective-C class template. Make the new class inherit from UIButton (because you want the new view to handle touches) and call it PhotoView. Open up PhotoView.h and replace everything inside with:
#import <UIKit/UIKit.h>
 
//1 layout config
#define kThumbSide 90
#define kPadding 10
 
//2 define the thumb delegate protocol
@protocol PhotoViewDelegate <NSObject>
-(void)didSelectPhoto:(id)sender;
@end
 
//3 define the thumb view interface
@interface PhotoView : UIButton
@property (assign, nonatomic) id<PhotoViewDelegate> delegate;
-(id)initWithIndex:(int)i andData:(NSDictionary*)data;
@end
Here’s what the above does:
  1. First, you need some constants – since the photos are square, you define only the width of the thumbnail. You have 90px as the thumbnail width and therefore you will have a layout of 3 columns on the screen (you have 270px of width for the 3 columns plus 20px for the 2 margins between the columns).
  2. Next, you define a protocol for the thumbnail view to communicate with your controller. You have one method the controller has to implement: didSelectPhoto:. Whenever the user taps on a thumbnail, the thumb view will let its delegate know that it was tapped. Then the controller can open up the full-size photo screen. You’ll make the Stream Screen view controller conform to this protocol later on.
  3. Finally, you define the interface of the class. You need a property to hold a reference to the delegate. Since the PhotoView instances are going to be directly added in the view hierarchy of the view controller, which will be the delegate, you use assign for the property.
Your custom initializer for the class will take an index. This index will be used to calculate the row and column on which the thumbnail appears. It’ll also get the photo data and make a request to the server to fetch the full-size photo.
Let’s get to implementing all this!
There are a couple of things to do in PhotoView.m:
//add under #import "PhotoView.h"
#import "API.h"
 
//add under @implementation
@synthesize delegate;
Now you’re going to take a little detour and add one short method to the API class. You want it to give you back the URL of an image on the server by getting the ID of the photo you want to load. Add the method in the respective interface and implementation files of the API class:
//in API.h
-(NSURL*)urlForImageWithId:(NSNumber*)IdPhoto isThumb:(BOOL)isThumb;
 
//in API.m
-(NSURL*)urlForImageWithId:(NSNumber*)IdPhoto isThumb:(BOOL)isThumb {
    NSString* urlString = [NSString stringWithFormat:@"%@/%@upload/%@%@.jpg",
                           kAPIHost, kAPIPath, IdPhoto, (isThumb)?@"-thumb":@""
                           ];
    return [NSURL URLWithString:urlString];
}
Since you’re uploading all images into the “upload” folder, your method generates the full URL, including the server host and path. Also, depending on whether you want to fetch the thumbnail or the full-size format, it takes care to provide the correct file name in the URL returned.
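For example, for the photo with ID 15 the thumbnail URL comes out as <kAPIHost>/<kAPIPath>upload/15-thumb.jpg, and the full-size URL as <kAPIHost>/<kAPIPath>upload/15.jpg (with the two constants expanded to whatever host and path you configured for your server).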
Note: This tutorial covers the basics of a web-back-end/iPhone-client. These basics will give you a good understanding of how to model and design your product. But I need to mention that creating a real-life photo-sharing application takes a bit more expertise, especially when it comes to file storage.
I can’t cover all details here, but I really want to give you some pointers on where you might expect problems with file storage services.
  1. First of all, using the database IDs as filenames is good in order to get things working, but you should have a different approach for a production environment. If you use incremental numbers and store all files in the same folder, it is then relatively easy for someone to fetch all the photos from your server (and you might not necessarily want that).
  2. Furthermore, storing tons of photos on your own server will probably generate a lot of traffic. If you have one of those all-inclusive hosting packs you might not care for that, but from experience I know those types of offers tend to provide variable quality over time. What you’d like to have for a large-scale photo sharing web service is a distributed CDN, so you can provide speed and quality to your users around the world.
  3. Not to be overlooked is the file structure of the files stored. Having 500,000 files in a single folder is not a good idea – this makes it incredibly difficult to administer the content (this also applies when the files are located on a CDN). What you would like to do is distribute the uploaded photos into a balanced tree-like folder structure, so you can easily track the location of a single photo, and so the file folders are a manageable size.
  4. You will also want to check the content of the file being sent over. Storing user files on your servers is always dangerous. At the least, you need to check whether the file has a valid JPG format header, so you know it’s not anything harmful somebody is uploading through the API. Best is to do image processing on the server side, so you make sure you don’t keep the content as it was sent to you.
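While the server-side validation is the part that really matters, here's a minimal client-side counterpart you could add (for example in PhotoScreen.m) before handing the JPEG data to the API. The method name looksLikeJPEG: is hypothetical, not part of the tutorial project; it simply checks for the two magic bytes (0xFF 0xD8) that every JPEG file starts with:
//hypothetical helper: a quick sanity check that an NSData blob holds JPEG
//content before uploading it (JPEG files always begin with 0xFF 0xD8)
-(BOOL)looksLikeJPEG:(NSData*)data {
    if (data.length < 2) return NO;
    const unsigned char* bytes = (const unsigned char*)data.bytes;
    return bytes[0]==0xFF && bytes[1]==0xD8;
}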
With that said, let’s go back to your app! You still need to implement the thumbnail view – the only method left is the custom initializer. Add it now in two easy steps. Open PhotoView.m and add:
-(id)initWithIndex:(int)i andData:(NSDictionary*)data {
    self = [super init];
    if (self !=nil) {
        //initialize
        self.tag = [[data objectForKey:@"IdPhoto"] intValue];
 
        int row = i/3;
        int col = i % 3;
 
        self.frame = CGRectMake(1.5*kPadding+col*(kThumbSide+kPadding), 1.5*kPadding+row*(kThumbSide+kPadding), kThumbSide, kThumbSide);
        self.backgroundColor = [UIColor grayColor];
 
        //add the photo caption
        UILabel* caption = [[UILabel alloc] initWithFrame:
                            CGRectMake(0, kThumbSide-16, kThumbSide, 16)
                            ];
        caption.backgroundColor = [UIColor blackColor];
        caption.textColor = [UIColor whiteColor];
        caption.textAlignment = UITextAlignmentCenter;
        caption.font = [UIFont systemFontOfSize:12];
        caption.text = [NSString stringWithFormat:@"@%@",[data objectForKey:@"username"]];
        [self addSubview: caption];
 
  //step 2
    }
    return self;
}
This code should be pretty simple to understand – it’s mostly UI-related. Let’s go over it briefly:
  1. You store the ID of the photo in the tag property for future use.
  2. You calculate the row and the column based on the index of the photo (i.e. the 7th photo in the list is located on the 1st column of the 3rd row).
  3. You then calculate the frame of the view based on its row/column position, using the constants for the thumbnail side and the margin you want to have between the columns.
  4. You then add a UILabel that will show the name of the user who uploaded the photo.
You’ve taken care of the layout! Now continue adding functionality to the thumbnail. Add the following in place of the “//step 2” comment:
//add touch event
[self addTarget:delegate action:@selector(didSelectPhoto:) forControlEvents:UIControlEventTouchUpInside];
 
//load the image
API* api = [API sharedInstance];
int IdPhoto = [[data objectForKey:@"IdPhoto"] intValue];
NSURL* imageURL = [api urlForImageWithId:[NSNumber numberWithInt: IdPhoto] isThumb:YES];
 
AFImageRequestOperation* imageOperation = 
    [AFImageRequestOperation imageRequestOperationWithRequest: [NSURLRequest requestWithURL:imageURL]
                                                      success:^(UIImage *image) {
                                                          //create an image view, add it to the view
                                                          UIImageView* thumbView = [[UIImageView alloc] initWithImage: image];
                                                          thumbView.frame = CGRectMake(0,0,90,90);
                                                          thumbView.contentMode = UIViewContentModeScaleAspectFit;
                                                          [self insertSubview: thumbView belowSubview: caption];
                                                      }];
 
NSOperationQueue* queue = [[NSOperationQueue alloc] init];
[queue addOperation:imageOperation];
First of all, you handle touches by directly invoking didSelectPhoto: on the delegate (you don’t really need any extra methods in PhotoView).
Then you grab a reference to the shared API instance, get the IdPhoto value out of the photo data passed to the initializer, and finally you call urlForImageWithId:isThumb: (which you added moments ago to the API class) to get the URL of the image on the server.
You’re finally ready to fetch the image from the web and show it inside the view. AFNetworking defines a custom operation to load remote images, so that’s what you’re going to use. All you do is provide an NSURLRequest with the image URL and a block to be executed when the image is fetched. Inside the block, you create a UIImageView, load the fetched image, add the image view to your PhotoView instance and … voila! That’s all!
Well… not quite all. The operation is ready to be executed, but not just yet. In the final couple of lines of code, you initialize a new operation queue and add the operation to the queue. Now that’s really all!
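As a side note, AFNetworking also ships a UIImageView category whose setImageWithURL: method you'll meet later on the photo detail screen. If you didn't need to manage the operation yourself, a hedged alternative (assuming you import UIImageView+AFNetworking.h) would be to create the image view up front and let the category do the fetching:
//alternative sketch: let AFNetworking's UIImageView category fetch and
//assign the thumbnail asynchronously
UIImageView* thumbView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kThumbSide, kThumbSide)];
thumbView.contentMode = UIViewContentModeScaleAspectFit;
[self insertSubview:thumbView belowSubview:caption];
[thumbView setImageWithURL:imageURL];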
Your custom thumbnail is pretty awesome and it does everything by itself for you. I personally really like having such handy components around. What’s left is to show some of those nifty thumbnails on the Stream Screen!
Open StreamScreen.m (inside the Screens folder) and at the top under the single #import, add a few more:
#import "API.h"
#import "PhotoView.h"
#import "StreamPhotoScreen.h"
You’re including the API class (of course!), your new custom thumb component and the class to show the full-size photo.
You need a couple of private methods in the class, so below the imports also add the private interface:
@interface StreamScreen(private)
-(void)refreshStream;
-(void)showStream:(NSArray*)stream;
@end
You have to make a call to the server and show the thumbs immediately after the user opens the app, so invoke the method doing this in viewDidLoad. At the end of viewDidLoad add:
//show the photo stream
[self refreshStream];
Next add the refreshStream method itself to the end of the file (but before the @end):
-(void)refreshStream {
    //just call the "stream" command from the web API
    [[API sharedInstance] commandWithParams:[NSMutableDictionary dictionaryWithObjectsAndKeys: 
                                             @"stream",@"command",
                                             nil] 
                               onCompletion:^(NSDictionary *json) {
                                   //got stream
                                   [self showStream:[json objectForKey:@"result"]];
                               }];
}
By now, you should be absolutely familiar with what’s going on in this piece of code: you send the "stream" command to the server API, get back a list of photos as JSON, and pass the JSON to showStream:.
Next add the implementation for showStream::
-(void)showStream:(NSArray*)stream {
    // 1 remove old photos
    for (UIView* view in listView.subviews) {
        [view removeFromSuperview];
    }
    // 2 add new photo views
    for (int i=0;i<[stream count];i++) {
        NSDictionary* photo = [stream objectAtIndex:i];
        PhotoView* photoView = [[PhotoView alloc] initWithIndex:i andData:photo];
        photoView.delegate = self;
        [listView addSubview: photoView];
    }    
    // 3 update scroll list's height
    int listHeight = ([stream count]/3 + 1)*(kThumbSide+kPadding);
    [listView setContentSize:CGSizeMake(320, listHeight)];
    [listView scrollRectToVisible:CGRectMake(0, 0, 10, 10) animated:YES];
}
  1. First you remove all subviews in the “listView” UIScrollView. You need to do this because you call this same method when the user wants to refresh the photo stream, so there might be photos inside the scroll view already.
  2. You use a for loop over the returned photo records to fetch the photo data, and then create a PhotoView instance with it. Since PhotoView takes care of everything for you, you just need to set the view controller as delegate and add the thumbnail as a subview.
  3. Finally, you update the height of the scroll view. It’s pretty easy since you know how many thumbnails you have in total and how many rows they would occupy. You also scroll the list to the top (where new photos appear).
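For example, with 50 photos in the stream that's (50/3 + 1) = 17 rows (integer division), giving a content size of 320 points wide by 17 * (90 + 10) = 1700 points tall.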
OK, one final touch! You should have a little warning on the line where you set the thumb’s delegate. You get it because the view controller does not conform to the required protocol. Quickly switch to StreamScreen.h and fix that:
//under the import clause
#import "PhotoView.h"
 
//at the end of the @interface line
<PhotoViewDelegate>
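If it helps to see where you end up, the header would look roughly like this afterwards. This is only a sketch: the UIViewController superclass and the outlet declaration are assumptions based on how listView is used in this screen's implementation, not code copied from the starter project.
//StreamScreen.h - rough sketch of the declaration after the change
#import <UIKit/UIKit.h>
#import "PhotoView.h"
 
@interface StreamScreen : UIViewController <PhotoViewDelegate>
@property (weak, nonatomic) IBOutlet UIScrollView* listView;
@end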
Sweet! That fixes it. Hey, there’s further good news – the Stream Screen is now functional! If you’ve already uploaded a few photos to the server, fire up the project and you should see the thumbnails appearing on the main app screen. Congrats! You’ve made it through this long journey!
Tapping on the thumbnails doesn’t really work though, does it? Nope… that was a premature celebration!
You need to add the didSelectPhoto: method to the view controller, in StreamScreen.m. Just call the segue to show the full-size photo. But there’ll be a little extra work to prepare the segue, so you’ll also need a prepareForSegue:sender: method:
-(void)didSelectPhoto:(PhotoView*)sender {
    //photo selected - show it full screen
    [self performSegueWithIdentifier:@"ShowPhoto" sender:[NSNumber numberWithInt:sender.tag]];   
}
 
-(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    if ([@"ShowPhoto" compare: segue.identifier]==NSOrderedSame) {
        StreamPhotoScreen* streamPhotoScreen = segue.destinationViewController;
        streamPhotoScreen.IdPhoto = sender;
    }
}
Here you do a little trick. When a thumbnail is selected, you fetch its tag property, which holds the photo ID, and you make a number out of it, which you then in turn pass as the sender for the segue (very cocky). It’s just a shortcut to send a parameter to the prepareSegue method.
In prepareForSegue, you check if it’s the segue to show the full-size photo (by checking its identifier), and if so, you pass the IdPhoto to the StreamPhotoScreen, which is the target screen for this segue. This should be enough to connect the thumb with the full-size photo screen.
Just a small tweak to the interface: tapping the refresh button at the top left calls btnRefreshTapped, but the method body is empty. To make it refresh (aha!), all you need to do is call refreshStream. So go to btnRefreshTapped and add the following:
[self refreshStream];
Now you can handle showing the stream and taps on the thumbnails. And since the refresh button in the top-left corner is connected to refreshStream, the user can reload the stream whenever they want.
Not so much is left now, just a little more patience and effort. But things are coming to a wrap, so no worries! :]
If you open StreamPhotoScreen.h, you’ll see the IdPhoto property is already in place, so passing the ID from the Stream Screen to the full-size photo screen is taken care of. All you need to do is talk to the server to get the photo’s details, and then load the full-size image into the image view on the screen.
Switch to StreamPhotoScreen.m and make the following changes:
// 1. under the #import clause
#import "API.h"
 
// 2. inside the implementation
-(void)viewDidLoad {
    [super viewDidLoad];
 
    API* api = [API sharedInstance];
 
    //load the caption of the selected photo
    [api commandWithParams:[NSMutableDictionary dictionaryWithObjectsAndKeys:
                            @"stream",@"command",
                            IdPhoto,@"IdPhoto",
                            nil]
               onCompletion:^(NSDictionary *json) {
                   //show the text in the label
                   NSArray* list = [json objectForKey:@"result"];
                   NSDictionary* photo = [list objectAtIndex:0];
                   lblTitle.text = [photo objectForKey:@"title"];
               }];
 
    //load the big-size photo
    NSURL* imageURL = [api urlForImageWithId:IdPhoto isThumb:NO];
    [photoView setImageWithURL:imageURL];
}
When the view has loaded, you make the same call to the API as on the Stream Screen, but you also pass the ID of the desired photo as a parameter. If you remember, when this parameter is present the API returns only the data for this particular file.
When you get a response back from the API call, you take the photo data and show the photo title inside the UILabel (created in your storyboard file) lblTitle.
Ah! One final step: to actually load the photo in the image view. Thanks to AFNetworking, it’s pretty easy. AFNetworking defines a category on UIImageView and adds the handy method used above in the code, setImageWithURL:, which fetches a remote image from the web and loads it into the UIImageView.
Now it’s all in place. Fire up the project – see the stream, tap on photos and see their details. Also take photos and upload them! This is an awesome start on your way to creating a killer photo-sharing application!
Full size photo
Credit: Marin Todorov, http://www.raywenderlich.com/13541/how-to-create-an-app-like-instagram-with-a-web-service-backend-part-22
