In 2020, nearly 1 million new app submissions and app updates were rejected from the App Store for failing to meet the Apple App Store review guidelines.
In most cases, these rejections were minor or unintentional, stemming from a lack of information on the developer’s part. For instance, Jian, a DevelopPaper user, recently shared that their app submission was delayed because the word ‘Official’ appeared in their app title. The App Store review team interpreted the title as misleading and an attempt to deceive users.
As a mobile developer, knowing how the App Store review process works and the common reasons for delayed App Store review time or outright rejection will help you avoid unnecessary disappointments and revenue loss.
Before any new app or update can be published to the App Store, it must comply with all Apple App Store review guidelines.
According to Apple, the standard App Store review time is less than 24 hours, as “90% of submissions are reviewed in less than 24 hours.” However, if your app fails to meet any of the App Store review guidelines, the review may be delayed well beyond that window.
Following the Epic Games vs. Apple lawsuit in 2020, information that wasn’t previously public about the App Store review process was revealed. During the trial, Trystan Kosmynka, senior director of marketing/App Review, revealed that 100,000 new submissions are made to the App Store every week.
If all goes well, then there’s the human review stage. Apple employs about 500 human app reviewers who thoroughly examine each app against the App Store review guidelines to ensure that none of the rules are broken. Afterward, the reviewers make the call to accept, reject, or delay the approval of the app.
Apple constantly updates the App Store Review Guidelines to respond to new data privacy challenges and to make sure that the App Store continuously offers a safe experience for users to get apps. The best way to ensure that your app is accepted and remains on the App Store is to keep up with all of the updates.
Among the recent updates, one of the most prominent changes you should be aware of is that, as of June 30, 2022, Apple requires any app that offers account creation to also provide an end-to-end pathway for in-app account deletion. (P.S.: The website App Store Review Guidelines History publishes updates and changes made to the App Store review guidelines to make them easier to spot.)
There are many guidelines, but Apple recently shared the most common reasons why apps are rejected or their App Store review time is delayed. These are the checks you should definitely start with.
This is one of the major reasons why apps are rejected from the App Store. The App Store is only for complete, fully functional, and usable apps that are ready for distribution. As such, Apple rejects any app that it deems incomplete or buggy.
Make sure your app is 100% ready before submission and that it performs exactly as you claim. Your app could be considered unfinished if it promises certain features it doesn’t deliver on. Additionally, if your app contains placeholder content, broken links, or an incorrect version number, Apple reviewers may consider the app unfinished, which could lead to a rejection or delayed App Store review time.
Apple is also infamously unaccommodating of apps that crash or contain significant bugs. During the review, your app will be put through a series of stress and performance tests designed to break it. So, be sure to perform the same level of testing yourself before submission. Use a mobile CI/CD tool like Bitrise to enforce regression checks at every point, test on real devices, and invite beta testers to go through your app before submitting it to the App Store.
Apple reviewers will use an app the way a user would, to carefully confirm that everything works as expected and that the app adheres to Apple’s guidelines on privacy, safety, performance, design, and legal compliance. If they encounter any access hindrances or confusion, the app review process may be delayed, and your app could be rejected.
To prevent that, provide all the information needed to use the app. Detailed setup instructions, user account information, or other information about your app should be included in the App Review Information section of App Store Connect.
The information that users see on your App Store page before installing your app is referred to as metadata. This includes your app’s name, description, screenshots, previews, and keywords.
Apple frowns on metadata that does not accurately portray the app. Over 48,000 apps were rejected by the App Review team in 2020 for undocumented features, while over 150,000 were rejected for containing misleading information in their metadata.
To prevent your app from being flagged for such reasons, describe your app’s capabilities and features truthfully. Avoid exaggerating what your app can do or portraying it as something it’s not. Use screenshots that properly show the app in use, and if your app has in-app purchases, state that clearly. Also, indicate which features are actually free and which require a payment or subscription to unlock. Basically, just be transparent.
Apple considers privacy a fundamental right, which is why over 215,000 apps were rejected in 2020 for privacy violations, such as requesting more user data than they required or misusing user data.
Apple requires app developers to publish privacy policies that specify what user data will be collected and how it will be used or shared. The company also requires that app developers only request access to data that’s relevant to app functionality and obtain user consent before any data collection.
The biggest difference between ChatGPT and Bard is the Large Language Models (LLMs) that power them. ChatGPT uses the Generative Pre-trained Transformer 4 (GPT-4), while Bard uses the Language Model for Dialogue Applications (LaMDA). Also, ChatGPT is developed by OpenAI, while Bard was built by Google.
In terms of capabilities, both do very similar things. Programmers can use either one for:
Suggestions: both models can suggest the correct syntax and parameters for functions and other code constructs.
Completion: both can complete code that you have started writing.
Debugging: both can help you identify errors and problems in your code.
Explanation: both can explain the code they generate, or code you input.
Both models were trained on a massive dataset, including Common Crawl, Wikipedia, books, articles, documents, and content scraped from the internet. However, Bard is a little different in that it was trained on conversations and dialogues from the web, while ChatGPT was trained mostly on scraped general content.
Both products are still under development, Bard a bit more so than ChatGPT. But to really show how these differences actually matter in a practical sense, here’s how they work when tested against each other.
We chose seven categories to test the chatbots against each other, based on what we felt were the most relevant: code generation, problem solving, refactoring code, debugging assistance, third-party plugins/UI extension, ease of use, and cost. Let’s preface this by saying this is not a rigorous scientific comparison, but rather a handful of tests mixed with our hands-on experience to date.
The bread and butter of what we want it for, right? For this, we fed in the following prompt: “Write a function in python that returns a list of 100 prime numbers.”
def get_prime_numbers(n):
    """Returns a list of prime numbers up to n."""
    primes = []
    for i in range(2, n + 1):
        is_prime = True
        for j in range(2, i):
            if i % j == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(i)
    return primes


def main():
    """Prints a list of 100 prime numbers."""
    primes = get_prime_numbers(100)
    print(primes)


if __name__ == "__main__":
    main()
Looks solid, but does it work? In our experience, chatbots can sometimes produce code that doesn’t run, even going so far as to invent functions that don’t exist! That’s one reason why putting AI-generated code into production with no human oversight is a very bad idea. So let’s test it out and see.
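As a quick sanity check (our own test code, not part of the generated answer), we can call the function directly and inspect what comes back:

# Check what get_prime_numbers(100) actually returns
primes = get_prime_numbers(100)
print(len(primes))  # 25 -- the primes *up to* 100, not the first 100 primes
print(primes[:5])   # [2, 3, 5, 7, 11]
print(primes[-1])   # 97

The code runs fine, but note the interpretation: it returns the 25 primes up to 100 rather than the 100 primes our prompt asked for. Here’s what ChatGPT returned for the same prompt: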
def generate_primes(n):
    primes = []
    sieve = [True] * (n + 1)
    for current in range(2, n + 1):
        if sieve[current] != 0:
            primes.append(current)
            for multiple in range(current, n + 1, current):
                sieve[multiple] = False
    return primes


# Generate first 100 prime numbers
# The 100th prime number is 541, so we set the limit to 550 to make sure we get at least 100 primes
primes_100 = generate_primes(550)[:100]
print(primes_100)
Running the same prompt through ChatGPT produced this code, which defines a function generate_primes(n) that takes an integer n as input and returns a list of all prime numbers up to n. It uses the Sieve of Eratosthenes algorithm to generate the primes.
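A quick check (again, our own test code, not ChatGPT’s) confirms it delivers exactly what the prompt asked for:

# Verify we got 100 primes, ending at the 100th prime (541)
primes_100 = generate_primes(550)[:100]
print(len(primes_100))  # 100
print(primes_100[-1])   # 541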
Both of these produced a functional result. ChatGPT’s example is better in this case, since it’s cleaner, uses less code, and interpreted our vague prompt correctly. However, this is only one use case: chatbots are known to vary a lot in their responses depending on the programming language involved (with less common or poorly documented languages producing more errors) and the complexity of the query. So let’s see how Bard and ChatGPT handle different use cases.
Let’s see how each of these LLMs handles a problem. Rather than just asking them to write a generic function, let’s throw a programming problem at them from Leetcode:
Write JavaScript code that enhances all arrays such that you can call the array.last() method on any array and it will return the last element. If there are no elements in the array, it should return -1.
Here's how you would do it in JavaScript:
Array.prototype.last = function() {
  if (this.length === 0) {
    return -1;
  } else {
    return this[this.length - 1];
  }
};
In this code:
Array.prototype.last adds a new method called last to the Array.prototype object. This ensures that all arrays will have this method.
this refers to the array object on which the last method is being called.
this.length gives the number of elements in the array.
If the array is empty (this.length === 0), the method returns -1.
Otherwise, the method returns the last element of the array: this[this.length - 1].
Here is how you would use it:
let array = [1, 2, 3, 4, 5];
console.log(array.last()); // prints: 5
array = [];
console.log(array.last()); // prints: -1
Remember that extending built-in JavaScript objects like Array is generally not recommended, as it can lead to conflicts if the JavaScript environment (like a browser or Node.js) adds a method with the same name. This is generally called monkey patching and is usually avoided in production-grade code. But for quick and dirty hacks, or to understand how JavaScript works, it's a fun tool to use!
Not only does ChatGPT solve the problem, it explains its solution too. Neat! Let’s run the code through Leetcode and see if it actually passes the tests, though.
It looks like ChatGPT solved the problem on the first try! Now it’s time for Bard to step up to the plate.
Sure, here is the JavaScript code that enhances all arrays such that you can call the array.last() method on any array and it will return the last element. If there are no elements in the array, it should return -1:
// Define a new method on the Array prototype
Array.prototype.last = function() {
  // If the array is empty, return -1
  if (this.length === 0) {
    return -1;
  }

  // Otherwise, return the last element
  return this[this.length - 1];
};
Here is an example of how to use the array.last() method:
const array = [1, 2, 3, 4, 5];
// Get the last element of the array
const lastElement = array.last();
// Print the last element of the array
console.log(lastElement); // 5
Okay, so we didn’t get as thorough an explanation from Bard about how the code works. However, the code itself is nearly identical to what ChatGPT produced. Were they looking at each other’s homework?
Naturally, we can’t take Bard’s word for it, so we run the test through Leetcode, and…
… It works! Which is to be expected, given the answer was nearly identical to ChatGPT’s.
When it comes to problem solving, it’s a tough call between the two. While they produce nearly identical results, ChatGPT explains the code much better for you, which is pretty good for actually learning how to fish instead of having ChatGPT just fish for you. This has been our overall experience with using ChatGPT: not only do you get a possible solution for your problem, but ChatGPT walks you through it a bit more.
Let’s say you want to find a more optimized way to do something. It’s great to get a different viewpoint on your code, and unlike your teammates (assuming you have them), these tools are always available and ready to check over your code. So let’s see how they do! Here’s the sample we provided:
What is a more optimized way to write this code?
Array.prototype.last = function() {
  if (this.length === 0) {
    return -1;
  } else {
    return this[this.length - 1];
  }
};
So ChatGPT’s given us a pretty thin response. It briefly explains the code and suggests collapsing the if/else into a ternary operator, which is fine and worth checking out. However, it feels like it could have done a bit more. Let’s see how Bard handles the same assignment.
Wow! The difference between ChatGPT and Bard here is like chalk and cheese: Bard has clearly gone above and beyond. Not only does it offer optimized code, but it also provides code to create a benchmark and shows the benchmark results.
Overall, we’ve found Bard is a bit better at refactoring. Part of this is likely because Bard uses search engine information on top of being a Large Language Model (LLM), while ChatGPT is currently just an LLM. However, I should state that ChatGPT is currently beta-testing a “Search with Bing” feature and rolling this out to free users, so ChatGPT may become a whole lot better at refactoring code very soon. But for now, we have to give the win to Bard.
Bugs are part of life. Let’s throw an obviously flawed bit of code at both tools and see how well they pick up on it. See if you can spot the bug before ChatGPT and Bard do! Here’s the prompt we used: Debug the following code that has an error. Provide code that fixes possible errors with it.
def calculate_average(numbers):
    total = 0
    for number in numbers:
        total += number
    average = total / len(numbers)
    return average
All right, ChatGPT has given us back a response saying we need to add some logic to prevent a ZeroDivisionError, which this function raises if you pass it an empty list. It gives an option for doing so and explains the problem. Now it’s Bard’s turn.
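ChatGPT’s suggested fix amounted to something like the following sketch (chatbot output varies from run to run, so the exact code you get may differ):

def calculate_average(numbers):
    # An empty list makes len(numbers) zero, which raises a
    # ZeroDivisionError on the division below -- so guard against it first
    if not numbers:
        return 0  # or raise ValueError("numbers must not be empty")
    total = 0
    for number in numbers:
        total += number
    return total / len(numbers)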
Bard found the same problem with the function that ChatGPT did. But once again, Bard has given a much more detailed explanation. It outlines possible errors, explains how to fix them, tells us how to use the function and what the output would be. Whew!
For debugging, we’ve found that Bard is generally much more thorough in its answers and explanations. There have been times when ChatGPT was better at spotting bugs, but by and large, Bard provides clearer examples to the user.
Bard wins this one, and so we’re tied 2-2. Can one of them break the stalemate?
By connecting a third-party plugin to an LLM, we can extend its capabilities in some wild ways, letting it run code in the chat conversation or integrate with apps like Zapier.
ChatGPT offers over 80 plugins to its premium subscribers as a beta feature right now. To learn about some of these, check out our article: “The top ChatGPT plugins for developers.” Here’s an example of ChatGPT’s plugin store right now:
And here’s an example of Bard’s plugin store:
…Well, I can’t show you anything, because it doesn’t exist! It is rumored to be on the roadmap, but there’s no timeframe as of yet.
If you don’t want to use the web interface, both ChatGPT and Bard offer an API. However, Bard’s API is still invite-only, so we didn’t get to test it. ChatGPT’s API, on the other hand, is very thorough and complete. ChatGPT also has an official mobile app, which is surprisingly usable and quite handy while ideating.
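To give you a feel for it, here’s a minimal sketch of a ChatGPT API call using OpenAI’s official openai Python package (the pre-1.0 interface; you’ll need your own API key, and gpt-3.5-turbo is just one of the available models):

import openai

openai.api_key = "sk-..."  # replace with your own API key

# Send a single chat message and print the model's reply
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain the Sieve of Eratosthenes in two sentences."},
    ],
)
print(response.choices[0].message.content)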
For this one, we have to give the point to ChatGPT, due to Bard either not having the features yet, or hiding them behind an invite list.
Okay, so upfront, both ChatGPT and Bard are very easy to use. They both have a web interface where you enter a prompt and get a response. Fairly straightforward, right? They also both have “conversations” where they can hold context. However, there are differences between the two.
One big difference is how ChatGPT keeps track of your conversations. They’re stored on the left-hand side of the screen, there’s no limit to their length, and they’re always accessible. You can also delete them whenever you want.
In comparison, Bard doesn’t let you store and reopen past conversations. You can access your history and look up what you’ve searched, but you can’t click back into a conversation and resume it like you can with ChatGPT; you can only see the prompts you typed. On top of this, Bard limits the length of a conversation, so you have to start over if it runs too long.
One feature Bard has that ChatGPT doesn’t is the “drafts” feature. In Bard, you have access to a set of drafts so you can review different responses to your prompt, which is helpful. However, even with this, we found ChatGPT easier to use and more powerful.
Any tool comparison needs a section on how much it costs, right? ChatGPT has both a free version and a premium version called ChatGPT Plus, billed at $20 a month. Premium subscribers get access to real-time internet search features, plugins, better answers from the GPT-4 model, faster response times, priority access to new features, and access during peak times.
In comparison, Bard is free to everyone who has access. Getting that access requires either a personal Google Account that you manage yourself, or a Google Workspace account for which your admin has enabled Bard (which can be a bit disappointing if they haven’t).
It’s likely Bard will be commercialized at some point, but given it’s free vs freemium right now, Bard wins by default.
At a score of four to three, ChatGPT wins overall (👑), but in practice both of these tools should be a part of your arsenal. Here are some key points to keep in mind as a developer using these tools:
The base version of ChatGPT is an LLM only, which means its information can be out of date. Bard uses both an LLM and search data. This is likely to change fairly soon, with ChatGPT implementing “Search with Bing” in its free offering.
ChatGPT is generally better for generating documentation
Bard creates more thorough explanations of code most of the time
Bard limits the length of a conversation; ChatGPT only limits the number of requests over time (on GPT-4)
Remember that even if you’re using these tools, it’s important to understand the code you’re working with. Don’t become too reliant on them, because the results are never guaranteed to be accurate. Till next time, happy coding!