FAQs - Person Search API
Have a Question You Want Answered? Ask Us!
Head over to the Help Center and search for your question. If you still can't find what you're looking for, create a support ticket and we will get it answered for you!
What is the difference between the various Person APIs?
See this section for a detailed breakdown of the differences between our Person APIs.
Can I get a likelihood score in the Person Search API?
The likelihood score that we return in our Enrichment APIs represents how confident our matching logic is that the returned record corresponds to the input parameters, so it isn't relevant to the Person Search API.
Can I use the Person Search API to enrich data?
The Person Search API doesn't apply the nuanced matching logic used by our Person Enrichment API. We also don't do any preprocessing (cleaning) of the inputs to the Person Search API, so you're almost guaranteed to have a lower match rate.
What's more, Person Search API queries and Person Enrichment API queries are structured differently. When you make a Person Enrichment API request, we run a custom-built query that weights each input parameter differently, returning a match only in certain cases and using stack ranking when there are multiple candidate matches (for example, several profiles for John Smith in San Francisco).
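As a rough, hedged illustration of that difference, the sketch below contrasts the two request shapes in Python. The endpoint paths, header name, and field names are assumptions made for illustration only; check the API reference for the exact request format.

```python
# Illustrative sketch only -- endpoint paths, headers, and field names are
# assumptions; consult the current API reference for the exact request format.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

# Person Enrichment: you describe one person and our matching logic finds them.
enrich_response = requests.get(
    "https://api.peopledatalabs.com/v5/person/enrich",
    headers={"X-Api-Key": API_KEY},
    params={"first_name": "john", "last_name": "smith", "location": "san francisco"},
)

# Person Search: you write the query yourself (Elasticsearch DSL shown here),
# and only records that literally satisfy it are returned -- no preprocessing.
search_response = requests.post(
    "https://api.peopledatalabs.com/v5/person/search",
    headers={"X-Api-Key": API_KEY},
    json={
        "query": {
            "bool": {
                "must": [
                    {"term": {"first_name": "john"}},
                    {"term": {"last_name": "smith"}},
                    {"term": {"location_locality": "san francisco"}},
                ]
            }
        },
        "size": 10,
    },
)
```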
Should I use SQL or Elasticsearch for the Person Search API?
Use Elasticsearch when:
- You have complicated boolean queries.
- You want to maximize the control that you have over text-based matching (titles, summaries, and so forth).
- You are comfortable writing Elasticsearch queries.
Use SQL when:
- You are running simple searches with only a few parameters.
- You're exclusively using ENUM parameters from our data (location, company, major, and so forth).
- You are comfortable writing SQL queries.
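For a feel of the difference, here is the same search sketched in both syntaxes. The field names and the query/sql parameter names are assumptions; verify them against the Person Search API reference.

```python
# Sketch of the same search expressed both ways; field and parameter names
# are assumptions for illustration -- verify them against the search docs.

# Elasticsearch-style query: full control over boolean and text-based matching.
es_body = {
    "query": {
        "bool": {
            "must": [
                {"term": {"job_title_role": "engineering"}},
                {"term": {"location_region": "california"}},
            ]
        }
    },
    "size": 10,
}

# SQL-style query: concise when you only filter on a few ENUM-like fields.
sql_body = {
    "sql": "SELECT * FROM person "
           "WHERE job_title_role='engineering' AND location_region='california'",
    "size": 10,
}
```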
Can I use wildcards in my search query?
Yes, wildcard terms are supported. However, we have a hard limit of 20 wildcards per query. See the relevant sections for more information about the limitations in Elasticsearch and SQL search queries.
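As a hedged example, a single wildcard term in an Elasticsearch-style query body might look like the sketch below (the field name is an assumption; check the schema documentation):

```python
# Hedged sketch: a wildcard clause in an Elasticsearch-style search body.
# The field name "job_title" is an assumption -- check the schema docs.
wildcard_body = {
    "query": {
        "bool": {
            "must": [
                {"wildcard": {"job_title": "engineer*"}},  # one wildcard term
            ]
        }
    },
    "size": 10,
}
# Keep the total number of wildcard terms in a single query at or below 20.
```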
Why is there a 1MB limit on the API requests and the response body?
The error message that you are receiving occurs because the query is too large and cURL can't handle the response. Calling the API with Python should alleviate this issue, as Python compresses the extra space within an Elasticsearch query. Additional ways to decrease the query size are to reduce the profile count parameter from 100 to 60 for each call and to remove the pretty tag. You can also get around this limitation by Using POST Requests.
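As a hedged sketch, moving the query into the body of a POST request, lowering the profile count, and dropping the pretty flag might look like this in Python (the endpoint path and parameter names are assumptions; check the API reference):

```python
# Sketch: put the query in the POST body instead of the URL and omit "pretty".
# Endpoint path and parameter names are assumptions -- see the API reference.
import requests

response = requests.post(
    "https://api.peopledatalabs.com/v5/person/search",
    headers={"X-Api-Key": "YOUR_API_KEY"},
    json={
        "query": {"bool": {"must": [{"term": {"location_region": "california"}}]}},
        "size": 60,  # a smaller profile count keeps each response smaller
        # no "pretty" flag -- pretty-printing only inflates the response body
    },
)
print(response.status_code)
```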
Can I exclude PDL IDs in a Person Search call to avoid spending duplicate credits?
You can technically exclude up to 1,000 PDL IDs in a single search query; however, you will eventually run into limitations imposed by our infrastructure. While a PDL ID is mostly persistent, IDs can be merged, deleted, or opted out of our dataset across releases. Additionally, we truncate extremely long queries in our internal logs, which makes it more difficult to assist you should you need technical support. At some point in the future, we will place limitations on query length and/or execution time to avoid these costly queries.
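If you do choose to exclude previously retrieved profiles, one way to express it is a must_not clause over the identifier field. The sketch below assumes the field is named "id" and uses obviously fake placeholder IDs; verify the field name against the schema documentation.

```python
# Sketch of excluding already-retrieved PDL IDs from a search.
# Assumes the identifier field is named "id"; the IDs below are placeholders.
already_retrieved_ids = ["PDL_ID_PLACEHOLDER_1", "PDL_ID_PLACEHOLDER_2"]

exclusion_body = {
    "query": {
        "bool": {
            "must": [{"term": {"job_title_role": "engineering"}}],
            # Keep this list well under the 1,000-ID ceiling noted above.
            "must_not": [{"terms": {"id": already_retrieved_ids}}],
        }
    },
    "size": 10,
}
```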
Will I be charged for retrieving the same profile multiple times?
Yes. PDL charges for each retrieval of the same profile.
Example 1
You search for Engineers in California, then conduct another search for Engineers in San Francisco. You will be charged for the overlapping profiles in the second request.
Example 2
You search for Engineers in California and search again for Engineers in California. You will be charged the same amount for each request.
Example 1 is a bit more difficult to handle programmatically and likely needs a UI/UX approach to limit overlapping queries.
To handle situations like Example 2, PDL recommends duplicate call detection. You can use methods like unique key constraints. For example, you can hash the request, store the hash, and check for a duplicate before sending the call to the API.
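A minimal sketch of that hashing approach is shown below. It uses an in-memory set for brevity; a production implementation would typically persist the hashes in a database column with a unique key constraint.

```python
# Minimal sketch of duplicate-call detection by hashing the request body.
# Uses an in-memory set for brevity; in production, store the hash in a
# database column protected by a unique key constraint.
import hashlib
import json

seen_hashes = set()

def should_send(request_body: dict) -> bool:
    """Return True only if this exact request has not been sent before."""
    # Canonicalize the body so logically identical requests hash identically.
    canonical = json.dumps(request_body, sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False  # duplicate -- skip the call and save the credits
    seen_hashes.add(digest)
    return True

query = {"query": {"term": {"location_region": "california"}}, "size": 100}
print(should_send(query))  # True  -- first time this request is seen
print(should_send(query))  # False -- identical request detected
```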