ASO myths that keep coming up in 2026
Table of Contents:
- Blaming the algorithm is usually a way of not doing the work
- Calling ASO "SEO for apps" causes more problems than it solves
- Ranking for more keywords is easy to show and hard to justify
- Creative testing is only useful if you know what you're trying to learn
- ASO doesn't really have a point where you're done
- A few questions from the session
- FAQ
AppFollow recently ran a webinar on ASO myths, and it covered a lot of ground worth writing down. The panel included Katherine, Head of ASO at MY.GAMES; Marina Roglic, Head of ASO at TapNation, who works in mobile gaming; and Jimmy, App Marketing Manager at Toca Boca. Ilya from AppFollow moderated.
TL;DR: There's a version of ASO work that looks fine from the outside. Keywords are in, screenshots exist, the app is live, and whoever is responsible for it can point to a ranked keyword chart and call it a day. But a lot of teams are running on assumptions that don't really hold up when you look at them, and the gap between doing ASO adequately and doing it well is bigger than most people realize.
We also recommend watching the entire webinar here:
Blaming the algorithm is usually a way of not doing the work
When downloads drop, the first place a lot of teams look is the algorithm. At first glance, what else could it be?
Something changed, the store is behaving differently, and there's nothing to be done about it. And yes, sometimes that's what happened. The App Store algorithm changed twice last year, and when it does change, it can affect performance in real ways.
But most of the time that's not what's going on, and using it as the default explanation means nobody goes looking for the real cause. The algorithm is a convenient thing to point at because it's external and invisible, and there's no follow-up action required (very important bit!).
The more useful response to a drop is to treat it like a diagnostic problem.
Start with traffic sources. Did something change in search, browse, paid, or collections? Then look at the category more broadly. A competitor might have started bidding harder on a keyword you rely on. A new app might have appeared and started pulling users who would have found yours. One of your competitors might have changed its category entirely, which can affect how the store treats nearby apps. These are all findable things if you go looking.
Split the data by country, by traffic source, by channel, and figure out where the drop is coming from before drawing any conclusions. If browse traffic fell, maybe you lost a placement somewhere, and that's most of the story. If conversion dropped, that's a different kind of problem and needs a different response. Once you've done that, it's reasonable to check whether anyone else in the industry is seeing something similar. But that's the second step, not the first.
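If your store data comes out of a tool as a flat export, the split itself is mechanical. Here's a minimal sketch in pandas, assuming a hypothetical CSV with date, country, traffic_source, and downloads columns (real column names will vary by tool):

```python
import pandas as pd

# Hypothetical export: one row per day, country, and traffic source
df = pd.read_csv("store_performance.csv", parse_dates=["date"])

cutoff = pd.Timestamp("2026-01-15")  # the day the drop started
before = df[df["date"] < cutoff]
after = df[df["date"] >= cutoff]

def daily_avg(frame):
    """Average daily downloads per country/traffic-source segment."""
    segments = frame.groupby(["country", "traffic_source"])["downloads"].sum()
    return segments / frame["date"].nunique()

# The most negative deltas show where the drop is actually concentrated
delta = (daily_avg(after) - daily_avg(before)).sort_values()
print(delta.head(10))
```

If one country and one traffic source account for most of the delta, you're no longer debugging "the algorithm"; you're debugging a specific placement or market.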
Pointing at the algorithm and leaving it there doesn't change anything. Downloads don't come back because you named a cause.
Calling ASO "SEO for apps" causes more problems than it solves
For a long time, this was the easiest way to explain ASO to someone who didn't know what it was. It's still not a terrible shorthand if you're at a dinner party and someone asks what you do. But inside a team, treating ASO and SEO as roughly the same discipline tends to narrow what people think the job involves.
The goals are different in ways that matter. In SEO, traffic is the main metric. Getting people to click is a win. In ASO, a download is what you're after, and the relationship between impressions and downloads is more loaded than in web search. If you're getting a lot of impressions on a keyword but not many downloads, that can actually signal to the algorithm that the keyword wasn't a good match for your app. High traffic without conversion can make things worse.
The constraints are also different: On Apple, you have something like 160 characters across title, subtitle, and keyword field. There's no room for the kind of content that works in web search. You can't build a narrative, and every character has to do something specific.
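Those limits are concrete enough to check mechanically. On Apple that's a 30-character title, a 30-character subtitle, and a 100-character keyword field; the field values in this sketch are made-up placeholders:

```python
# Apple's metadata limits: 30-char title and subtitle, 100-char keyword field.
# The strings below are illustrative, not recommendations.
fields = {
    "title": ("Flight Tracker: Live Status", 30),
    "subtitle": ("Cheap flights & travel deals", 30),
    "keywords": ("airfare,plane,airport,departures,arrivals,boarding", 100),
}

for name, (text, limit) in fields.items():
    status = "OK" if len(text) <= limit else "OVER"
    print(f"{name}: {len(text)}/{limit} {status}")
```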
And there are parts of ASO that don't map onto SEO at all. Ratings show up right next to the icon before a user does anything else, and a 3.0 rating suppresses downloads far more directly than a bad review on some third-party site ever suppresses search rankings. Custom product pages, in-app events, the way your store creative connects to paid campaigns: none of these has a clean equivalent in web search.
The risk of leaning on the SEO comparison is that it pulls focus toward visibility and not much else. Getting people to see the app is one part of the job. What happens after that is at least as important, and if the mental model is "keywords and ranking," a lot of what drives downloads gets overlooked.
Ranking for more keywords is easy to show and hard to justify
There's a version of keyword reporting that looks good in a slide. A growing list of ranked terms, broader coverage across the category, positions improving over time. It's a satisfying chart to show and relatively straightforward to produce.
The problem is that none of it tells you whether any of it is working in a meaningful sense. If the keywords have low traffic and conversion through them is thin, the number is just a number. It demonstrates activity more than results.
Relevance is what determines whether a keyword is worth having. On iOS, the character limits force some discipline anyway, but on Google Play, there's more room, and that flexibility can make the problem worse. When you can add more, the temptation is to add more, and the result is sometimes a long list of terms that look like coverage but aren't doing anything.
A more grounded way to track keyword performance is to look at impressions alongside installs and conversion rate. When you change a keyword set, impressions should go up. Conversion shouldn't fall at the same time. If both things are moving in the right direction, the change was probably worth making. If impressions go up and conversion drops, something is off.
One thing that gets skipped a lot: change keywords separately from other things. If you update the keyword set and redesign the screenshots at the same time, you won't be able to work out which one drove whatever change you see. It sounds obvious, but it's a pretty common mistake.
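Once changes are isolated, the before/after check is simple arithmetic. A sketch with made-up numbers standing in for two equal periods around a keyword update:

```python
# Made-up totals for 14 days before and 14 days after a keyword update
before = {"impressions": 120_000, "installs": 4_200}
after = {"impressions": 150_000, "installs": 5_500}

cvr_before = before["installs"] / before["impressions"]
cvr_after = after["installs"] / after["impressions"]
print(f"Impressions: {before['impressions']:,} -> {after['impressions']:,}")
print(f"Conversion:  {cvr_before:.2%} -> {cvr_after:.2%}")

# Worth keeping if impressions rose and conversion held (small tolerance for noise)
keep = after["impressions"] > before["impressions"] and cvr_after >= cvr_before * 0.95
print("Keep the change:", keep)
```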
Creative testing is only useful if you know what you're trying to learn
Running screenshot tests without a clear idea of what you're testing is something that happens a lot, particularly on teams that are newer to ASO. You update the icon, try a different color, follow some design trend you saw in a competitor's store page, and then see what the numbers do. If they go up, great. If they don't, you try something else.
The issue is that without a defined hypothesis, you can't really learn anything from the result either way. You end up with a history of tests but no accumulated understanding of what your users respond to or why.
Before running a test, it's worth knowing what specifically you're trying to find out, what a good result looks like, and what you'd do with a negative result. Testing whether showing a new feature in the first few screenshots improves conversion is a testable idea with a clear output. Testing whether the screenshots look nicer is not.
It's also reasonable to be skeptical of what the platform tells you when a test concludes. Test results from App Store and Play Store aren't perfectly reliable, and the confidence levels the platforms show are there for a reason. A result that looks positive in a 50/50 test can behave differently once the change goes live to everyone. Checking performance with two weeks of real data after rollout gives a more honest picture, and it's worth building that into the process rather than treating the platform's conclusion as final.
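If you want your own read on whether a 50/50 result is likely real rather than noise, a standard two-proportion z-test is one way to sanity-check it before the rollout data even arrives. A minimal sketch with made-up numbers, standard library only:

```python
from math import sqrt
from statistics import NormalDist

# Made-up 50/50 test results: installs and impressions per variant
x1, n1 = 980, 30_000    # control
x2, n2 = 1_060, 30_000  # new screenshots

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)

# Standard two-proportion z-test
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Conversion: {p1:.2%} vs {p2:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

With these numbers the variant looks better, but the p-value lands around 0.07, which is exactly the kind of result worth re-checking with live data after rollout.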
One simple thing worth trying when you're evaluating screenshots: convert them to black and white and see where your eye lands. When there's too much competing for attention, the black and white version tends to make that obvious. Effective store creative usually focuses on one or two things clearly. Screenshots that try to communicate everything at once often end up communicating nothing particularly well.
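The conversion itself is a one-liner with Pillow, assuming the screenshots are local image files:

```python
from PIL import Image

# "L" mode drops color and keeps a single luminance channel
bw = Image.open("screenshot_01.png").convert("L")
bw.save("screenshot_01_bw.png")
bw.show()  # notice where your eye lands first
```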
How often should you be testing? Probably more often than most teams do. There's always something to try, and the category around your app is always changing. If a test shows something promising, let it run long enough to cover a full week, since user behavior on a Sunday is different from a Wednesday and shorter tests can miss that.
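A quick way to confirm that a planned test window actually covers every day of the week, again standard library only:

```python
from datetime import date, timedelta

start, end = date(2026, 3, 2), date(2026, 3, 10)  # hypothetical test window
weekdays = {(start + timedelta(days=d)).weekday()
            for d in range((end - start).days + 1)}
print("Covers a full week:", len(weekdays) == 7)
```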
ASO doesn't really have a point where you're done
A lot of apps get a reasonable amount of ASO work done around launch, and then it sits mostly untouched until there's a major release or something goes wrong. The metadata stays the same, screenshots age out, keyword performance drifts without anyone noticing. The app is still there and still ranking for something, but the gap between where it is and where it could be tends to grow quietly.
If you're not running tests and updating things, someone else in your category probably is. If they find something that improves conversion or ranking, they move up. Positions in the store aren't permanent, and they don't default to whoever got there first.
Promo content and in-app events are a part of this that tends to get less attention than keywords and screenshots, but they matter, especially on Google Play, where promotional content has become more important for browse and explore traffic over the past couple of years. These things need regular attention, and the way they perform across different regions and languages is worth tracking over time.
Keywords probably need revisiting at least quarterly to check whether something has shifted. Screenshots and other creative elements can be tested more frequently. The cycle doesn't have to be intense, but there should be a cycle. The teams that treat ASO as something that runs continuously tend to be in a better position than the ones that treat it as a project with an end date.
The word "optimization" is sitting right there in the name.
A few questions from the session
Which ASO tools are worth using right now?
AppFollow, AppTweak, and Appfigures are the main options. The core search data is broadly similar across tools, and it mostly comes down to which interface fits how your team works.
Screenshots improve, but conversion still drops. What's going on?
"Better quality" and "better performing" aren't the same thing. A flight search company where the creative that worked best wasn't beautiful at all, just a white background with text listing destination prices. The audience's priorities determined what worked, not aesthetic quality. Understanding who the user is and what they care about matters more than making something that looks polished.
How do you handle keyword repetition in Google Play descriptions?
The concern isn't really whether to repeat or not; it's whether the listing still reads as a coherent, useful piece of text. Place the most important keywords early, especially in the first paragraph, since that's what shows on the web. Aim for around 5 to 7 keywords, repeat a few of them a handful of times, and use fewer repetitions for the rest. Then test it, check the results a month later, and adjust.
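Counting repetitions before publishing takes a few lines; the keyword list and file name here are illustrative:

```python
import re

with open("play_description.txt", encoding="utf-8") as f:
    description = f.read().lower()

# Your own 5-7 core terms go here
keywords = ["flight tracker", "cheap flights", "airfare", "travel deals"]

for kw in keywords:
    count = len(re.findall(re.escape(kw), description))
    print(f"{kw}: {count} occurrence(s)")
```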
How do you know if an app store creative test worked?
Don't just rely on what the platform tells you at the end of the test. A result that looks positive in a 50/50 test can behave differently once it's live to everyone. Verifying with two weeks of real data after rollout gives a clearer picture. Also, run tests long enough to cover different days of the week, since user behavior shifts across the week in ways that can skew shorter tests.
Many asked about B/A/A (A/B/B) tests. In theory, they sound like a good tool for checking how reliable your tests are; in practice, they're just as broken as A/B/C tests. The best course of action might be:
- Give the results a vote of confidence when they come in
- Apply the winning option
- Keep a close eye on analytics and stay in the loop with what UA and product are doing
- Run a B/A test after a week or so to confirm the decision, while still watching analytics
We don't exist in a vacuum; every player in the category has some influence on store conversion. Find a method that works for you, and never feel bad about asking peers for input!
FAQ
What should I check first when app store downloads drop?
Split the data before you do anything else. Look at which traffic source changed, which countries are affected, and whether it's impressions or conversion that moved. That tells you what kind of problem you're dealing with. A drop in browse traffic might mean you lost a collection placement. A drop in conversion is a different issue. Find where the problem actually is before assuming a cause.
How often should ASO metadata be updated?
There's no universal answer, but treating it as something you revisit quarterly is a reasonable baseline for keyword-level work. Creative testing can happen more frequently. In-app events and promo content are often tied to release schedules or seasonal moments, so those get updated more regularly by nature. The main thing is having a regular cycle rather than only updating when something breaks.
Does keyword repetition matter on Google Play?
Repetition in Google Play descriptions is a real tactic, but it stops being useful when the listing becomes hard to read. A rough guide is to focus on 5 to 7 core keywords, repeat the most important ones a few times, and make sure the first paragraph reads clearly, since that's what shows up on the web. Test it, check the results after a month, and adjust based on what you see.
What makes a keyword worth targeting?
A keyword worth targeting brings impressions, ranks somewhere in the top 10, and actually converts: people who come through it download the app. If a keyword is indexed but traffic is low and nothing converts through it, there's not much value there. Relevance to what the app actually does tends to be a better filter than search volume alone.
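As a filter, that translates into something like the sketch below; the thresholds and field names are illustrative, not a standard:

```python
# Each keyword as a dict: rank, daily impressions, installs attributed to it
keywords = [
    {"term": "flight tracker", "rank": 4, "impressions": 900, "installs": 40},
    {"term": "plane games", "rank": 45, "impressions": 30, "installs": 0},
]

def worth_targeting(kw, min_impressions=100, max_rank=10):
    """Bringing impressions, ranked in the top 10, and actually converting."""
    return (kw["impressions"] >= min_impressions
            and kw["rank"] <= max_rank
            and kw["installs"] > 0)

for kw in keywords:
    verdict = "keep" if worth_targeting(kw) else "reconsider"
    print(f'{kw["term"]}: {verdict}')
```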