YAKE!

Getting Started

Quick Start Guide

Get YAKE! up and running in less than 5 minutes. Follow these simple steps to start extracting keywords from your texts.



📦 Installing YAKE!

Installation

YAKE! requires Python 3.7+ and can be installed directly from GitHub.

uv pip install git+https://github.com/INESCTEC/yake

💡 Optional: Lemmatization Support

To use lemmatization features (aggregate keyword variations like "tree/trees"), install the optional dependencies:

uv pip install yake[lemmatization]

Then download the required language models:

# For spaCy (recommended)
python -m spacy download en_core_web_sm
# For NLTK (alternative)
python -c "import nltk; nltk.download('wordnet'); nltk.download('omw-1.4')"

🚀 Usage (Command Line)

Basic Command Structure

yake [OPTIONS]

Available Options

| Option | Type | Description |
|--------|------|-------------|
| -ti, --text_input | TEXT | Input text (must be surrounded by single quotes) |
| -i, --input_file | TEXT | Path to input file |
| -l, --language | TEXT | Language code (e.g., 'en', 'pt', 'es') |
| -n, --ngram-size | INTEGER | Maximum n-gram size (default: 3) |
| -t, --top | INTEGER | Number of keywords to extract (default: 20) |
| -df, --dedup-func | CHOICE | Deduplication method: leve, jaro, or seqm |
| -dl, --dedup-lim | FLOAT | Deduplication threshold (0.0-1.0) |
| --lemmatize | FLAG | Enable lemmatization (default: False) |
| --lemma-aggregation | CHOICE | Aggregation method: min, mean, max, or harmonic (default: min) |
| --lemmatizer | CHOICE | Backend: spacy or nltk (default: spacy) |
| -v, --verbose | FLAG | Show detailed scores |

💡 Example Commands

# Extract top 10 keywords from text
yake -ti 'Your text goes here. YAKE will extract the most important keywords.' -t 10 -v
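A few more combinations of the options above (input.txt is a placeholder for your own file):

# Extract keywords from a Portuguese text file, limiting n-grams to 2
yake -i input.txt -l pt -n 2 -t 15

# Use Jaro deduplication with a stricter 0.8 threshold
yake -ti 'Your text goes here.' -df jaro -dl 0.8

# Aggregate morphological variants with the mean of their scores
yake -ti 'Trees are important. Tree conservation matters.' --lemmatize --lemma-aggregation mean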

🔄 Keyword Deduplication Methods

Why Deduplication?

YAKE may extract similar keywords (e.g., "machine learning" and "machine learning algorithm"). Deduplication merges similar keywords to avoid redundancy.

YAKE uses three methods to compute string similarity during keyword deduplication:
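In the Python API (shown in full further below), the method and threshold correspond to the dedupFunc and dedupLim parameters. A minimal sketch:

import yake

# A candidate keyword is treated as a duplicate when its similarity to an
# already-selected keyword exceeds dedupLim; 'leve', 'jaro', and 'seqm'
# (described below) select the similarity function.
kw_extractor = yake.KeywordExtractor(dedupFunc='leve', dedupLim=0.8)
for kw, score in kw_extractor.extract_keywords("machine learning and machine learning algorithms"):
    print(kw, score)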



1. leve — Levenshtein Similarity

  • What it is: Measures the edit distance between two strings — how many operations (insertions, deletions, substitutions) are needed to turn one string into another.
  • Formula used:
similarity = 1 - Levenshtein.distance(cand1, cand2) / max(len(cand1), len(cand2))
  • Best for: Typos and small changes (e.g., "house" vs "horse"), where it is very accurate
  • Performance: Medium speed

2. jaro — Jaro Similarity

  • What it is: Measures similarity based on matching characters and their relative positions
  • Implementation: Uses the jellyfish library
  • Best for: Names and short strings with swapped letters (e.g., "maria" vs "maira"), as it is tolerant of transpositions
  • Performance: Fast

3. seqm — SequenceMatcher Ratio

  • What it is: Uses Python's built-in difflib.SequenceMatcher
  • Formula:
ratio = 2 * M / T

where M is the number of matching characters, and T is the total number of characters in both strings.

  • Best for: Detecting shared blocks in longer strings
  • Performance: Fast

Comparison Table

| Method | Based on | Best for | Performance |
|--------|----------|----------|-------------|
| leve | Edit operations | Typos and small changes | Medium |
| jaro | Matching positions | Names and short strings with swaps | Fast |
| seqm | Common subsequences | General phrase similarity | Fast |

Practical Examples

| Compared Strings | leve | jaro | seqm |
|------------------|------|------|------|
| "casa" vs "caso" | 0.75 | 0.83 | 0.75 |
| "machine" vs "mecine" | 0.71 | 0.88 | 0.82 |
| "apple" vs "a pple" | 0.8 | 0.93 | 0.9 |

Recommendation: For general use with a good balance of speed and accuracy, seqm is a solid default (and it is YAKE's default). For stricter lexical similarity, choose leve. For names or when letter swaps are common, go with jaro.

Lemmatization

YAKE supports lemmatization to aggregate keywords with the same lemma, reducing redundancy from morphological variations like "tree"/"trees" or "run"/"running"/"runner".

When lemmatization is enabled, YAKE combines morphological variations and aggregates their scores using one of four methods:

1. min — Best Score (Default)

  • What it is: Selects the keyword with the lowest (best) score from all morphological variations
  • Formula: final_score = min(score_tree, score_trees)
  • Best for: Most cases — selects the most relevant form
  • Performance: Fast

2. mean — Average Score

  • What it is: Averages scores across all morphological variations
  • Formula: final_score = sum(scores) / len(scores)
  • Best for: When all forms are equally important
  • Performance: Fast

3. max — Worst Score

  • What it is: Uses the highest (worst) score — most conservative approach
  • Formula: final_score = max(score_tree, score_trees)
  • Best for: Conservative filtering to ensure only high-quality keywords
  • Performance: Fast

4. harmonic — Harmonic Mean

  • What it is: Calculates the harmonic mean of all scores
  • Formula: final_score = n / sum(1/score for score in scores)
  • Best for: Balanced approach between min and mean
  • Performance: Fast

Comparison Table

| Method | Based on | Best for | Example: "tree" (0.05) + "trees" (0.08) |
|--------|----------|----------|------------------------------------------|
| min | Lowest score | General use - best variant | 0.05 |
| mean | Average | All forms equally important | 0.065 |
| max | Highest score | Conservative filtering | 0.08 |
| harmonic | Harmonic mean | Balanced combination | ~0.061 |
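The example column can be verified with plain Python (illustrative arithmetic only, not YAKE API):

scores = [0.05, 0.08]  # scores for "tree" and "trees"

aggregated = {
    "min": min(scores),
    "mean": sum(scores) / len(scores),
    "max": max(scores),
    "harmonic": len(scores) / sum(1 / s for s in scores),
}
print(aggregated)  # min=0.05, mean=0.065, max=0.08, harmonic~0.0615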

Practical Example

import yake
 
text = "Trees are important. Many trees provide shade. Tree conservation matters."
 
# With lemmatization
kw_extractor = yake.KeywordExtractor(
    lan="en",
    n=1,
    lemmatize=True,  # Enable lemmatization
    lemma_aggregation="min"  # Use best score (default)
)
 
keywords = kw_extractor.extract_keywords(text)
# Results combine "tree", "trees" into a single entry

Installation: Requires spaCy or NLTK:

pip install yake[lemmatization]
python -m spacy download en_core_web_sm

Recommendation: Use min (default) for general cases as it selects the most relevant form. Use mean or harmonic when you want to consider all variations equally.

Usage (Python)

How to use YAKE! from Python:

import yake
 
text = """Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning 
competitions. Details about the transaction remain somewhat vague, but given that Google is hosting its Cloud 
Next conference in San Francisco this week, the official announcement could come as early as tomorrow. 
Reached by phone, Kaggle co-founder CEO Anthony Goldbloom declined to deny that the acquisition is happening. 
Google itself declined 'to comment on rumors'. Kaggle, which has about half a million data scientists on its platform, 
was founded by Goldbloom  and Ben Hamner in 2010. 
The service got an early start and even though it has a few competitors like DrivenData, TopCoder and HackerRank, 
it has managed to stay well ahead of them by focusing on its specific niche. 
The service is basically the de facto home for running data science and machine learning competitions. 
With Kaggle, Google is buying one of the largest and most active communities for data scientists - and with that, 
it will get increased mindshare in this community, too (though it already has plenty of that thanks to Tensorflow 
and other projects). Kaggle has a bit of a history with Google, too, but that's pretty recent. Earlier this month, 
Google and Kaggle teamed up to host a $100,000 machine learning competition around classifying YouTube videos. 
That competition had some deep integrations with the Google Cloud Platform, too. Our understanding is that Google 
will keep the service running - likely under its current name. While the acquisition is probably more about 
Kaggle's community than technology, Kaggle did build some interesting tools for hosting its competition 
and 'kernels', too. On Kaggle, kernels are basically the source code for analyzing data sets and developers can 
share this code on the platform (the company previously called them 'scripts'). 
Like similar competition-centric sites, Kaggle also runs a job board, too. It's unclear what Google will do with 
that part of the service. According to Crunchbase, Kaggle raised $12.5 million (though PitchBook says it's $12.75) 
since its   launch in 2010. Investors in Kaggle include Index Ventures, SV Angel, Max Levchin, Naval Ravikant,
Google chief economist Hal Varian, Khosla Ventures and Yuri Milner """

Simple usage with default parameters

kw_extractor = yake.KeywordExtractor()
keywords = kw_extractor.extract_keywords(text)
 
for kw in keywords:
    print(kw)

Specifying custom parameters

language = "en"
max_ngram_size = 3
deduplication_threshold = 0.9
deduplication_algo = 'seqm'
windowSize = 1
numOfKeywords = 20
 
kw_extractor = yake.KeywordExtractor(lan=language, 
                                     n=max_ngram_size, 
                                     dedupLim=deduplication_threshold, 
                                     dedupFunc=deduplication_algo, 
                                     windowsSize=windowSize, 
                                     top=numOfKeywords)
                                            
keywords = kw_extractor.extract_keywords(text)
 
for kw in keywords:
    print(kw)

Output

The lower the score, the more relevant the keyword is.

('google', 0.026580863364597897)
('kaggle', 0.0289005976239829)
('ceo anthony goldbloom', 0.029946071606210194)
('san francisco', 0.048810837074825336)
('anthony goldbloom declined', 0.06176910090701819)
('google cloud platform', 0.06261974476422487)
('co-founder ceo anthony', 0.07357749587020043)
('acquiring kaggle', 0.08723571551039863)
('ceo anthony', 0.08915156857226395)
('anthony goldbloom', 0.09123482372372106)
('machine learning', 0.09147989238151344)
('kaggle co-founder ceo', 0.093805063905847)
('data', 0.097574333771058)
('google cloud', 0.10260128641464673)
('machine learning competitions', 0.10773000650607861)
('francisco this week', 0.11519915079240485)
('platform', 0.1183512305596321)
('conference in san', 0.12392066376108138)
('service', 0.12546743261462942)
('goldbloom', 0.14611408778815776)


Copyright ©2018-2026 INESC TEC. Distributed under an INESC TEC license.