FAQs: Search

How does ProfileMap extract entities from the given search request?

Entities are detected by looking for each entity’s name and synonyms within the provided text. If multiple entities share the same name or synonym, all of them will appear in the list of returned entities.
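
As a rough illustration, this kind of lookup can be pictured as a simple name-and-synonym match over the ontology. The following is a minimal sketch in Python, not ProfileMap’s actual implementation; the entity structure shown here is an assumption made for the example:

    # Illustrative sketch only -- the entity structure is assumed for this example.
    entities = [
        {"id": 1, "name": "Java", "synonyms": ["Java SE", "JDK"]},
        {"id": 2, "name": "JavaScript", "synonyms": ["JS", "ECMAScript"]},
    ]

    def extract_entities(text):
        """Return every entity whose name or one of its synonyms occurs in the text."""
        text_lower = text.lower()
        matches = []
        for entity in entities:
            terms = [entity["name"]] + entity["synonyms"]
            if any(term.lower() in text_lower for term in terms):
                matches.append(entity)
        return matches  # entities sharing a matched name or synonym all appear

    print([e["name"] for e in extract_entities("Looking for a JS developer")])
    # ['JavaScript']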

How do I remove an entity from the search criteria, and how do I add an additional one?

Entities can be removed by hovering over them with the mouse and clicking on the garbage bin icon that appears next to the entity level.

Entities can be added using the respective search bars. These searches offer suggestions while the user types into the text field by comparing the entered text with the names and synonyms of the respective entities. An entity is added to the search criteria by clicking on it.

What does it mean to search for “BSU” or “All” competencies?

For competencies, the search is by default restricted to the business unit (BSU) of the user. Depending on how this is set up at your company, there may be a single business unit for the whole company or different ones for different groups within the company. Each business unit can define its own taxonomy. Searching for BSU competencies means that only competencies defined in the business unit’s taxonomy will be suggested. This can be changed by selecting the “All” radio button, which results in all non-deactivated competencies in the ontology being considered for suggestion.
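
As an illustration, this suggestion behaviour can be thought of as a filter over the ontology’s competencies. The sketch below is only an assumption about how such a filter could look, not ProfileMap’s actual code; the field names are made up for the example:

    # Illustrative sketch only -- field names are assumptions, not ProfileMap's API.
    def suggest_competencies(typed_text, competencies, user_bsu, scope="BSU"):
        """Suggest competencies whose name or a synonym starts with the typed text.

        scope="BSU": only competencies defined in the user's business unit taxonomy.
        scope="All": every competency in the ontology that is not deactivated.
        """
        typed = typed_text.lower()
        suggestions = []
        for comp in competencies:
            if comp["deactivated"]:
                continue
            if scope == "BSU" and user_bsu not in comp["business_units"]:
                continue
            terms = [comp["name"]] + comp["synonyms"]
            if any(term.lower().startswith(typed) for term in terms):
                suggestions.append(comp)
        return suggestions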

How does the search behave if I request different levels for competencies or languages?

The required competency and language levels can be set to a value between 1 and 4. A profile will appear in the result list only if it contains at least one requested entity at the required level or higher, or if it contains a competency that makes it very likely that the candidate has the requested competency at the required level even though it is not explicitly mentioned in the profile (e.g. knowing a library of a programming language indicates that the candidate also knows the programming language). The language and competency levels can also affect whether candidates appear higher or lower in the result list.
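
For illustration, the level check described above can be sketched as follows. This is a simplified reading, not the actual implementation; the "implies" mapping is an assumption standing in for the ontology relationships that make an indirect match likely:

    # Illustrative sketch only -- "implies" is an assumed stand-in for the ontology
    # relationships that make an indirect match likely (e.g. a library implies its
    # programming language).
    def satisfies_level(profile, requested_id, required_level, implies):
        """True if the profile meets the requested level directly or indirectly."""
        # Direct match: the entity is present at the required level (1-4) or higher.
        if profile.get(requested_id, 0) >= required_level:
            return True
        # Indirect match: some competency in the profile implies the requested one.
        return any(requested_id in implies.get(owned, ()) for owned in profile)

    profile = {"pandas": 3}                  # the candidate knows the pandas library
    implies = {"pandas": {"python"}}         # knowing pandas implies knowing Python
    print(satisfies_level(profile, "python", 2, implies))   # True (indirect match)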

How do I set an entity to be mandatory and how does this influence the search?

Entities can be made mandatory by hovering over their name with the mouse and selecting the checkbox.

If one or more entities are made mandatory, only candidates whose profiles contain all the mandatory entities at least at the required level will appear in the result list. Indirectly matching a competency is not enough in this case.

Which profiles end up in the search result list?

If one or more entities are made mandatory, only candidates whose profiles contain all the mandatory entities at least at the required level will appear in the result list. Indirectly matching a competency is not enough in this case.

If no entities are made mandatory, a profile will appear in the result list if it contains at least one requested entity at the required level or higher, or if it contains a competency that makes it very likely that the candidate has the requested competency at the required level even though it is not explicitly mentioned in the profile (e.g. knowing a library of a programming language indicates that the candidate also knows the programming language).

On the search results page a sorted list of candidates is displayed. The candidates are ordered by how well the system thinks each candidate’s profile fits the entered search request. For each request, at most 100 candidates will be displayed.
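
Putting the three paragraphs above together, the construction of the result list can be sketched roughly as follows. The candidate methods and the ranking function are placeholders, not ProfileMap’s API; the ranking itself is described in the next answer:

    # Illustrative sketch only -- meets_directly(), meets_indirectly() and
    # ranking_score() are placeholders, not ProfileMap's actual API.
    def build_result_list(candidates, requested, mandatory_ids, ranking_score):
        results = []
        for cand in candidates:
            # Mandatory entities must be matched directly at the required level.
            mandatory = [e for e in requested if e.id in mandatory_ids]
            if not all(cand.meets_directly(e) for e in mandatory):
                continue
            # Otherwise at least one requested entity must match, directly or indirectly.
            if not any(cand.meets_directly(e) or cand.meets_indirectly(e) for e in requested):
                continue
            results.append(cand)
        # Best-fitting profiles first, at most 100 entries are displayed.
        results.sort(key=ranking_score, reverse=True)
        return results[:100]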

How is the order in which the profiles are displayed determined?

To determine this order, the system calculates for each candidate how similar the candidate’s profile is to the request with respect to different aspects. For example, one aspect used in the search is how similar a candidate’s profile is to the search request with respect to academic disciplines. An academic discipline is a field of study or profession such as “Marketing”, “Software Architecture” or “Machine Learning”.

The system uses different measures to determine the similarity of an aspect, but typically the request and the profile are considered more similar the larger the overlap between their entities is and the more closely the entities that do not match directly are related to other entities in the candidate’s profile.

The different aspect similarities are then passed to a machine learning model that computes the overall ranking score from them. The model has been trained on feedback from users rating how well a candidate’s profile fits the respective search request. From this, the model learns which aspects and which combinations of aspects are important for different kinds of search requests. It tries to predict how many stars an expert would give the fit between request and profile. The results are ordered by how high the model’s prediction for the corresponding profile is.
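
As a rough sketch of this two-step structure (aspect similarities first, then a learned score), the ranking could look like the following. The aspect functions and the model object are assumptions used only to illustrate the flow described above:

    # Illustrative sketch only -- the aspect functions and the model are assumed
    # stand-ins for the components described above.
    def rank_profiles(request, profiles, aspect_similarities, model):
        """Order profiles by the predicted number of stars an expert would give."""
        scored = []
        for profile in profiles:
            # One similarity value per aspect, e.g. academic disciplines, competencies, ...
            features = [similarity(request, profile) for similarity in aspect_similarities]
            predicted_stars = model.predict(features)   # model trained on user star ratings
            scored.append((predicted_stars, profile))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [profile for _, profile in scored]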

What do the percentage values mean?

The large percentage value on the entries of the search result list and at the top of the side-by-side comparison is the overall score. A machine learning model has been trained on feedback from users rating how well a candidate’s profile fits the respective search request. It tries to predict how many stars an expert would give the fit between request and profile. The prediction is then scaled to 0-100%.

In addition to this, scores for competencies, projects, and certificates are displayed on the entries of the search result list and in the side-by-side comparison, while a language score is shown only in the side-by-side comparison. The language score corresponds to the fraction of requested languages that are in the candidate’s profile. So, if three languages are requested but the candidate only has one of them in his or her profile, the score will be 33%. Likewise, the certificate score is the fraction of requested certificates that are in the candidate’s profile, and the project score is the fraction of requested competencies and languages that appear in at least one of the candidate’s projects.
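
The fraction-based scores (language, certificate, project) can be written down directly. A minimal sketch of this calculation, reproducing the 33% example above:

    # Illustrative sketch only -- a direct reading of the fractions described above.
    def fraction_score(requested, in_profile):
        """Share of requested items found in the candidate's profile, in percent."""
        if not requested:
            return 0.0
        return 100.0 * len(set(requested) & set(in_profile)) / len(requested)

    # Three languages requested, the candidate has one of them in the profile:
    print(round(fraction_score(["English", "German", "French"], ["English", "Spanish"])))
    # 33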

For competencies, the machine learning model takes various indirect ontology relationships of these competencies into consideration. To account for this, the competency score is not calculated as a fraction of matching competencies but is instead derived from the overall score calculated by the machine learning model and corrected for the results of the other three scores. This way, the overall score can be seen as a combination of the competency, language, certificate, and project scores, with the competency score accounting for the largest part of the overall score.

Why are the overall and the competency score not at 100% even though all search criteria are met?

In contrast to the other scores, the overall and the competency score depend on a complex machine learning model. Since this model tends to be cautious about predicting very high and very low values, and since the amount of information it considers goes beyond what is displayed in the side-by-side comparison, the overall score and the competency score will typically not be close to 100% when all shown criteria are met, and not close to 0% when hardly any are.

How do you interpret the side-by-side comparison?

The side-by-side comparison screen is split into two sides. On the left side, the request text and the requested competencies, languages and certificates are shown. On the right side, the corresponding relevant parts of the candidate’s profile are displayed. Here, requested entities that are missing from the candidate’s profile are displayed in red and crossed out. Requested entities that are also in the candidate’s profile are displayed in black. Related competencies that support that a candidate possesses a certain competency are displayed indented and in grey. Relevant projects are displayed on the right with directly or indirectly matching tags being highlighted in green.

When does ProfileMap consider a project to be relevant in the side-by-side comparison?

Projects are tagged with competencies and languages. If at least one of the tags matches one of the requested entities directly or indirectly, the project is considered relevant and displayed.
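
A minimal sketch of this relevance check, with the indirect-match test left as a placeholder for the ontology-based matching described in the other answers:

    # Illustrative sketch only -- matches_indirectly() is a placeholder for the
    # ontology-based indirect matching described elsewhere in this FAQ.
    def is_relevant(project_tags, requested_entities, matches_indirectly):
        """A project is relevant if at least one of its tags matches a requested
        entity, directly or indirectly."""
        return any(tag == entity or matches_indirectly(tag, entity)
                   for tag in project_tags
                   for entity in requested_entities)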

How can I provide feedback concerning the quality of the search results?

On the search results page, if the search is saved, the user can provide feedback to the system on how well a candidate’s profile fits the search request by selecting one to five stars for each candidate. Giving a candidate’s profile five stars means that the profile fits the search request very well. Giving it one star means that the profile doesn’t fit the search request at all. This information can be used to further train the machine learning model that orders the search results.

How can I save a search?

A search can be saved by clicking on the “Save search” button at the top of the search results page. In this case, the search parameters and the search results are saved together with additional metadata that can be set by filling out the form that appears when clicking on “Request data”.

How can I load a search?

Saved searches are listed in the search history, which can be reached via “Search” in the navigation bar. The metadata field “Request title” is listed as “Name” in the search history. When clicking on an entry in this table, the saved search is loaded, including the previously set metadata. The search results are stored together with the search, which makes loading a search significantly quicker than executing a new one and ensures the same results as when the search was originally executed.

How can I edit a saved search?

If the search parameters of a loaded search are changed, the search must first be executed again before the adapted search can be saved. Saving an adapted loaded search always overwrites the original search. If a new search should be saved instead, the “New Search” button must be used to enter a new search request.
