2048 (3x3, 4x4, 5x5) AI

Price: Free
Category: Games / Casual / Puzzle
Last update: Sep 06, 2021

Ratings & Reviews performance

Ratings & Reviews performance provides an overview of what users think of your app. Here are the key metrics to help you identify how users rate your app and how successful your review management strategy is.

Number of reviews, total: 72
Avg rating, total: ⭐ 4.5

Description

Classic 2048 puzzle game redefined by AI. Our 2048 is one of a kind in the market: we leverage multiple algorithms to build an AI for the classic 2048 puzzle game.

* Redefined by AI *

We created an AI that takes advantage of multiple state-of-the-art algorithms: Monte Carlo Tree Search (MCTS) [a], Expectimax [b], Iterative Deepening Depth-First Search (IDDFS) [c], and Reinforcement Learning [d]. An illustrative sketch of each technique follows this description.

(a) Monte Carlo Tree Search (MCTS) is a heuristic search algorithm introduced in 2006 for computer Go; it has since been used in other games such as chess, and of course in this 2048 game. MCTS chooses the best available move from the current state of the game tree (similar to IDDFS).

(b) Expectimax search is a variation of the minimax algorithm that adds "chance" nodes to the search tree. The technique is commonly used in games with nondeterministic behavior, such as Minesweeper (random mine locations), Pac-Man (random ghost moves), and this 2048 game (random tile spawn position and value).

(c) Iterative Deepening Depth-First Search (IDDFS) is a search strategy in which a depth-limited version of DFS is run repeatedly with increasing depth limits. IDDFS is optimal like breadth-first search (BFS) but uses far less memory. This 2048 AI implementation assigns heuristic scores (or penalties) to multiple board features (e.g. empty-cell count) to compute the optimal next move.

(d) Reinforcement learning is the training of ML models to choose actions in an environment so as to maximize cumulative reward. This 2048 RL implementation has no hard-coded intelligence (i.e. no heuristic score based on a human understanding of the game): there is no built-in knowledge of what makes a good move, and the AI agent figures it out on its own as we train the model.

References:
[a] https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf
[b] http://www.jveness.info/publications/thesis.pdf
[c] https://cse.sc.edu/~MGV/csce580sp15/gradPres/korf_IDAStar_1985.pdf
[d] http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-8.pdf
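
(a) MCTS sketch. Below is a minimal sketch of the random-rollout idea at the heart of MCTS, applied to 2048. It is not the app's actual code: the board helpers (slide_left, move, legal_moves, spawn) and the flat Monte Carlo move picker are illustrative assumptions. A full MCTS builds a search tree with UCB1-based selection, expansion, and backpropagation on top of these rollouts.

```python
import random

N, DIRS = 4, ("left", "right", "up", "down")

def slide_left(row):
    # Compress non-zero tiles and merge equal neighbours once, 2048-style.
    t = [x for x in row if x]
    out, i = [], 0
    while i < len(t):
        if i + 1 < len(t) and t[i] == t[i + 1]:
            out.append(2 * t[i])
            i += 2
        else:
            out.append(t[i])
            i += 1
    return out + [0] * (N - len(out))

def move(b, d):
    # Apply one of the four moves by reducing everything to "slide left".
    rows = [list(r) for r in zip(*b)] if d in ("up", "down") else [r[:] for r in b]
    if d in ("right", "down"):
        rows = [r[::-1] for r in rows]
    rows = [slide_left(r) for r in rows]
    if d in ("right", "down"):
        rows = [r[::-1] for r in rows]
    return [list(r) for r in zip(*rows)] if d in ("up", "down") else rows

def legal_moves(b):
    return [d for d in DIRS if move(b, d) != b]

def spawn(b):
    # The chance event: a 2 (90%) or a 4 (10%) appears in a random empty cell.
    cells = [(i, j) for i in range(N) for j in range(N) if b[i][j] == 0]
    if cells:
        i, j = random.choice(cells)
        b[i][j] = 2 if random.random() < 0.9 else 4
    return b

def rollout(b, limit=40):
    # Play uniformly random moves; score by survival length plus tile sum.
    for steps in range(limit):
        ms = legal_moves(b)
        if not ms:
            return steps + sum(map(sum, b)) / 1024
        b = spawn(move(b, random.choice(ms)))
    return limit + sum(map(sum, b)) / 1024

def mc_best_move(b, n_rollouts=30):
    # Flat Monte Carlo: rate each first move by its average rollout value.
    scored = [(sum(rollout(spawn(move(b, d))) for _ in range(n_rollouts)) / n_rollouts, d)
              for d in legal_moves(b)]
    return max(scored)[1] if scored else None
```

For example, mc_best_move(spawn(spawn([[0] * N for _ in range(N)]))) picks an opening direction for a freshly seeded board.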
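
(b) Expectimax sketch. Expectimax makes the chance event explicit: max nodes (the player picks a move) alternate with chance nodes (each empty cell may receive a 2 with probability 0.9 or a 4 with probability 0.1). The sketch below reuses the board helpers from the MCTS sketch above; the search depth and the simple empty-cells-plus-max-tile evaluation are assumed for illustration, not the app's actual tuning.

```python
def empty_cells(b):
    return [(i, j) for i in range(N) for j in range(N) if b[i][j] == 0]

def evaluate(b):
    # Toy evaluation: favour empty space and a large max tile (assumed weights).
    return 10 * len(empty_cells(b)) + max(max(r) for r in b)

def expectimax(b, depth, player_turn):
    if depth == 0:
        return evaluate(b)
    if player_turn:
        # Max node: the player chooses the best legal move.
        vals = [expectimax(move(b, d), depth - 1, False) for d in legal_moves(b)]
        return max(vals) if vals else evaluate(b)
    # Chance node: expected value over every possible tile spawn.
    cells = empty_cells(b)
    if not cells:
        return evaluate(b)
    total = 0.0
    for i, j in cells:
        for tile, p in ((2, 0.9), (4, 0.1)):
            child = [row[:] for row in b]
            child[i][j] = tile
            total += p * expectimax(child, depth - 1, True)
    return total / len(cells)  # spawn cell is chosen uniformly at random

def expectimax_best_move(b, depth=3):
    scored = [(expectimax(move(b, d), depth - 1, False), d) for d in legal_moves(b)]
    return max(scored)[1] if scored else None
```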
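
(c) IDDFS sketch. IDDFS reruns a depth-limited DFS with growing limits, getting BFS-like completeness at DFS-like memory cost. The sketch below (again reusing the board helpers above) searches only the player's own moves, ignoring random spawns for brevity, and scores leaves with the empty-cell feature named in the description; the time budget and depth cap are illustrative assumptions.

```python
import time

def count_empty(b):
    # The heuristic feature from the description: empty-cell count.
    return sum(cell == 0 for row in b for cell in row)

def dls(b, depth):
    # Depth-limited DFS over the player's moves (random spawns ignored here).
    if depth == 0:
        return count_empty(b)
    ms = legal_moves(b)
    if not ms:
        return count_empty(b)
    return max(dls(move(b, d), depth - 1) for d in ms)

def iddfs_best_move(b, time_budget=0.05, max_depth=8):
    # Iterative deepening: repeat the search with limit 1, 2, 3, ... and keep
    # the best move from the deepest search finished within the time budget.
    deadline = time.time() + time_budget
    best = None
    for depth in range(1, max_depth + 1):
        scored = [(dls(move(b, d), depth - 1), d) for d in legal_moves(b)]
        if scored:
            best = max(scored)[1]
        if time.time() >= deadline:
            break
    return best
```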
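
(d) Reinforcement learning sketch. The point of the RL approach is that nothing is hard-coded: the agent only sees rewards. The Q-learning sketch below shows that update rule on a deliberately tiny, hypothetical stand-in environment (a 6-state chain), since a real 2048 agent needs function approximation; every name and constant here is an illustration, not the app's model.

```python
import random
from collections import defaultdict

# Hypothetical toy environment standing in for the 2048 board: states 0..5,
# actions 0 (left) / 1 (right), reward 1.0 only for reaching state 5.
def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(5, s + 1)
    return s2, (1.0 if s2 == 5 else 0.0), s2 == 5

Q = defaultdict(float)              # state-action values, all start at 0
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def greedy(s):
    # Break ties randomly so the untrained agent still tries both actions.
    q0, q1 = Q[(s, 0)], Q[(s, 1)]
    return random.randrange(2) if q0 == q1 else (0 if q0 > q1 else 1)

for episode in range(500):
    s = 0
    for _ in range(200):  # step cap per episode
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        # Core Q-learning update: only the reward signal shapes behaviour;
        # there is no hand-coded notion of what a "good move" is.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
        if done:
            break

print([greedy(s) for s in range(5)])  # learned policy: should be all 1s ("right")
```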

Screenshots

https://is1-ssl.mzstatic.com/image/thumb/Purple114/v4/06/33/98/06339813-e732-72ac-bbd2-74963c6f7364/bfac8ab1-72f2-4fab-8196-d87db6d8b5ad_pad_1.png/2048x2732bb.png
https://is2-ssl.mzstatic.com/image/thumb/Purple124/v4/8a/07/cb/8a07cb38-73ac-c954-ec33-f3a73787e9fc/daa63c49-6248-43db-8f62-0e1f54371ccf_pad_2.png/2048x2732bb.png
https://is1-ssl.mzstatic.com/image/thumb/Purple114/v4/98/b2/d6/98b2d683-3a7a-48c0-a756-8f3b26bf27d2/3d6879ae-b897-4aeb-9050-61dd621702dc_pad_3.png/2048x2732bb.png
https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/b0/29/0b/b0290be7-5a86-2c06-2adb-4f1ff1236526/9c5c6c80-e89a-42ca-97b3-64b29e263e79_pad_4__U0028v1.6_U0029.png/2048x2732bb.png
