At least two independent reviewers must read each abstract and separately make a decision on it. If a larger team is available, the results can be divided among pairs of reviewers. Software designed for this purpose is helpful; some tools are available online, one of particular note being Covidence.
Each reviewer must decide individually, without influence from the other, whether each abstract should be included (Yes), excluded (No), or needs further examination of the full text or discussion (Maybe).
When both reviewers have finished, their decisions should be compared.
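This comparison step can be sketched in code. A minimal example, assuming each reviewer's decisions are stored in a dictionary keyed by abstract ID (a hypothetical data layout, not prescribed by any particular screening tool):

```python
def find_disagreements(reviewer_a, reviewer_b):
    """Return abstract IDs where the two reviewers' decisions differ.

    reviewer_a / reviewer_b: dicts mapping abstract ID to one of
    "Yes", "No", or "Maybe". Both dicts cover the same abstracts.
    """
    return sorted(
        abstract_id
        for abstract_id in reviewer_a
        if reviewer_a[abstract_id] != reviewer_b.get(abstract_id)
    )


# Example: abstract 102 needs discussion before the pair can move on.
a = {101: "Yes", 102: "No", 103: "Maybe"}
b = {101: "Yes", 102: "Maybe", 103: "Maybe"}
conflicts = find_disagreements(a, b)
```

The flagged IDs are then the agenda for the reviewers' reconciliation discussion.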
Retrieve the full text of all articles in the "Yes" and "Maybe" categories.
Again, two independent reviewers should read them and make separate inclusion/exclusion decisions. As before, those decisions are recorded and compared, and disagreements are resolved.
At this point, there should be agreement on the full sets of included and excluded articles. In some cases, an additional expert or team member may be needed to resolve any serious disagreements.
Counts and measures of inter-rater reliability should be kept for both stages.
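A commonly used inter-rater reliability measure for two reviewers is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch of computing it directly from the two decision lists (written from scratch here rather than via a statistics library, as an illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same items.

    ratings_a / ratings_b: equal-length sequences of category labels
    (e.g. "Yes" / "No" / "Maybe"), one entry per abstract.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled the same.
    p_observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(ratings_a) | set(ratings_b)
    p_chance = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)


a = ["Yes", "No", "No", "Maybe", "Yes"]
b = ["Yes", "No", "Maybe", "Maybe", "No"]
kappa = cohens_kappa(a, b)
```

Kappa is 1.0 under perfect agreement and near 0 when agreement is no better than chance; reporting it for both the abstract and full-text stages documents how consistently the screening criteria were applied.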