Science_blog


Sunday, 11 January 2026

Multidisciplinary Mega‑Journals: Has Their Time Passed?

    Over the past decade, multidisciplinary and so‑called “mega‑journals” became some of the most attractive destinations for researchers under pressure to publish quickly and visibly. These journals, often run by large commercial publishers such as Elsevier, Springer Nature, MDPI, and Frontiers, offered broad scopes, rapid peer review, and open‑access visibility across many disciplines at once, as described in this Science news article on fast‑growing open‑access journals losing their Impact Factors (https://www.science.org/content/article/fast-growing-open-access-journals-stripped-coveted-impact-factors). Titles like Science of the Total Environment (Elsevier), Heliyon (Elsevier), Environmental Science and Pollution Research (Springer Nature), and several MDPI flagship journals grew explosively, sometimes publishing tens of thousands of papers per year, powered in large part by special issues and guest‑edited collections, a pattern analyzed by Clarke & Esposito in “Not So Special” (https://www.ce-strategy.com/the-brief/not-so-special/) and in a blog documenting the delisting of an MDPI mega‑journal (https://mahansonresearch.weebly.com/blog/mdpi-mega-journal-delisted-by-clarivate-web-of-science).

    For many academics, especially early‑career researchers, this model looked like an efficient way to secure publications, metrics, and citations in a hyper‑competitive environment, as commentators note when connecting high volume, APC‑driven business models, and evaluation pressure. However, the very features that drove their success also exposed serious weaknesses. The huge volume of submissions and the proliferation of special issues created structural vulnerabilities in editorial oversight and peer review, an issue explored in depth in work on “special issue‑ization” as a growth and revenue strategy (https://www.tandfonline.com/doi/full/10.1080/08989621.2024.2374567). Guest editors were sometimes appointed in large numbers and given substantial autonomy, and in this environment, paper mills and unethical actors found an opportunity to slip low‑quality or fabricated manuscripts into the literature, a risk repeatedly flagged in publishing‑industry commentary and retraction case reports. Investigations in several journals uncovered patterns of fake peer reviewers, manipulated identities, and suspicious citation behaviors, prompting growing concern that parts of the mega‑journal ecosystem had become a conduit for unreliable science, as summarized in various Retraction Watch investigations into fake peer review and paper mills (https://retractionwatch.com/). In response, major indexing and metrics bodies began to act. Clarivate, which manages the Web of Science and Journal Impact Factor, delisted waves of journals across publishers in 2023 and beyond, many of them broad‑scope or multidisciplinary titles heavily reliant on special issues; an accessible summary is provided by the University of Portsmouth’s note “Web of Science de‑lists 82 journals” (https://researchandinnovationportsmouth.com/2023/03/30/web-of-science-de-lists-82-journals/).
    Reports highlighted that some fast‑growing open‑access journals had their Impact Factors stripped after being removed from Web of Science, signaling that rapid growth was no longer enough to guarantee long‑term index status, which the Science article above details. Individual cases, such as MDPI’s International Journal of Environmental Research and Public Health, Springer Nature titles such as Applied Nanoscience and Environmental Science and Pollution Research, and other mega‑journals, drew particular attention as examples of how quickly a once‑popular venue could lose or risk its indexed status when quality signals deteriorated; some of these are discussed in the MDPI delisting blog and Springer Nature’s page on discontinued and ceased journals (https://support.springernature.com/en/support/solutions/articles/6000223249-discontinued-and-ceased-journals-published-by-springer-nature). Elsevier’s Science of the Total Environment illustrates just how far this scrutiny can go. The journal was first placed “on hold” and later removed from Web of Science coverage following concerns about peer‑review integrity and clustered problematic articles, even while it continues to publish on ScienceDirect, as noted on its own integrity and news pages on ScienceDirect (https://www.sciencedirect.com/journal/science-of-the-total-environment/about/news/commitment-to-research-integrity-and-publishing-ethics). 
    Similar pressure has touched other high‑volume titles such as Heliyon, as well as Springer Nature’s Environmental Science and Pollution Research and related multidisciplinary or broad‑scope journals across publishers, with temporary holds, reevaluations, and more intensive audits becoming increasingly common; this trend is visible across news on mega‑journals being put “on hold” and in Clarivate‑related delisting summaries. At the same time, large publishers have retracted dozens of articles linked to fake companies, fabricated peer review, or paper‑mill patterns, underscoring that the challenge is systemic rather than limited to a handful of outliers, as documented in Retraction Watch reports on fake companies and suspicious authorship changes (e.g., https://retractionwatch.com/2025/05/14/dozens-of-elsevier-papers-retracted-over-fake-companies-and-suspicious-authorship-changes/). 
    This evolving situation has led many researchers to feel that the “time” of multidisciplinary mega‑journals, at least in their original, growth‑at‑all‑costs form, may be ending. It would be inaccurate to declare these journals dead: many remain indexed, widely cited, and capable of publishing rigorous work, and some are actively reforming their editorial and special‑issue policies, as publisher communications on integrity reforms suggest. But the era in which broad scope, high throughput, and minimal editorial friction were celebrated as unqualified virtues is clearly over; Clarivate’s criteria and recent delistings show that research integrity and content relevance now carry more weight than raw volume. Indexers deploy data‑driven tools to detect anomalies in submission patterns, authorship networks, and citations, and they are increasingly willing to delist entire journals when red flags accumulate, elevating the stakes for publishers and editorial boards. The reputational risk has shifted from authors alone to journals and publishers, forcing a reconsideration of practices that once seemed simply efficient and commercially attractive. For authors, the implications are direct and practical. Multidisciplinary venues can still be useful, especially for genuinely cross‑cutting work, but due diligence is now essential, a point stressed in university and library advisories on delisted journals and responsible journal selection. Before submitting, researchers should verify whether the journal is currently indexed in Web of Science and Scopus, check if it has recently been delisted or placed on hold, and examine the balance between regular issues and special issues, using Web of Science lists, Scopus discontinued‑journal lists, and publisher support pages as quick checks. 
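
    To make that due diligence concrete, here is a minimal Python sketch (an illustration only, not an official tool): it checks a journal title against a locally saved CSV export of a discontinued‑journal list, such as the Scopus discontinued‑titles spreadsheet saved as CSV. The file name and the “Source Title” column header are assumptions made for the example.

```python
import csv

# Minimal due-diligence sketch: look up a journal title in a locally saved
# CSV export of a discontinued-journal list (e.g. the Scopus discontinued-
# titles spreadsheet). File name and column header are assumed for the example.
def is_flagged(journal_title, csv_path="discontinued_journals.csv"):
    wanted = journal_title.strip().lower()
    with open(csv_path, newline="", encoding="utf-8") as f:
        return any(
            row.get("Source Title", "").strip().lower() == wanted
            for row in csv.DictReader(f)
        )

print(is_flagged("Example Journal of Everything"))
```

    No single list is complete or current, so a check like this should complement, not replace, a look at the journal’s publisher page, its index records, and its recent retraction history.
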
    Looking at retraction and correction activity can also help distinguish journals that actively manage integrity problems from those that ignore them, since visible, timely corrections often signal a functioning editorial quality‑control system rather than weakness, as many Retraction Watch case studies imply. In this new landscape, sustainable prestige will likely belong not to the loudest or largest multidisciplinary journals, but to those that can show convincing evidence of robust peer review, restrained publication volume, and transparent governance, regardless of publisher brand. The mega‑journal model is not disappearing, but it is being forced to mature—and that shift may ultimately benefit both science and the researchers who depend on it by rewarding rigor over sheer output. 

Useful links:
https://www.ce-strategy.com/the-brief/not-so-special/
https://www.science.org/content/article/fast-growing-open-access-journals-stripped-coveted-impact-factors
https://mahansonresearch.weebly.com/blog/mdpi-mega-journal-delisted-by-clarivate-web-of-science
https://www.tandfonline.com/doi/full/10.1080/08989621.2024.2374567
https://retractionwatch.com/2025/05/14/dozens-of-elsevier-papers-retracted-over-fake-companies-and-suspicious-authorship-changes/
https://researchandinnovationportsmouth.com/2023/03/30/web-of-science-de-lists-82-journals/
https://support.springernature.com/en/support/solutions/articles/6000223249-discontinued-and-ceased-journals-published-by-springer-nature
https://www.sciencedirect.com/journal/science-of-the-total-environment/about/news/commitment-to-research-integrity-and-publishing-ethics-science-of-the-total-environment
https://retractionwatch.com/2024/09/30/web-of-science-puts-mega-journals-cureus-and-heliyon-on-hold/
https://www.reddit.com/r/technology/comments/125756w/fastgrowing_openaccess_journals_stripped_of/
https://www.chemistryworld.com/news/sanctioning-of-50-journals-raises-concerns-over-special-issues-in-mega-journals/4017315.article
https://sites.aub.edu.lb/lmeho/ri2/delisted/
https://journalsearches.com/blog/scopus-discontinued-journals-list.php
https://www.jmis.org/board/view?b_name=bo_notice&bo_id=18&per_page=
https://journalology.kit.com/posts/journalology-22-delisted
https://pubmed.ncbi.nlm.nih.gov/36996220/
https://www.facebook.com/groups/985792791507045/posts/6101767359909537/
https://www.ce-strategy.com/the-brief/end-to-end/
https://www.osa-openscienceaustria.at/fast-growing-open-access-journals-stripped-of-coveted-impact-factors/

Different Faces of the Open Access Giants

Over the last decade, three names have come to dominate conversations about open‑access publishing: MDPI, Frontiers, and Hindawi. All three operate primarily on article processing charges (APCs), which means their revenue scales directly with the number of papers they accept and publish. This economic model has enabled rapid expansion and has made them highly visible options for authors seeking quick, open‑access publication. At the same time, it has raised concerns that the pressure to grow volume can clash with the need to maintain strong editorial standards [1,2,3,5,6].

MDPI is often seen as the purest expression of the “high‑volume OA platform.” It runs a large fleet of journals with relatively standardized workflows and leans heavily on guest‑edited special issues to attract submissions. For authors, that translates into a high chance of finding a special issue with a matching theme, relatively fast decisions, and generally lower APCs compared with some competitors. Critics, however, point to the sheer number of special issues and the speed of growth as structural risks: when dozens of guest editors are managing hundreds of collections, it becomes harder to keep tight control over peer review and to screen out paper‑mill activity. This tension is visible in delisting episodes and in institutional policies that now warn faculty to check the specific MDPI journal, not just the brand [7,8,9,10].

Frontiers looks similar on the surface: fully open access, uniform platform, strong reliance on themed collections (Research Topics), and very large output. But analyses suggest a few important differences. Frontiers has typically charged higher APCs, published more slowly than MDPI, and positioned its journals slightly higher in rankings on average. It has also invested more aggressively in narrative control, branding itself as one of the most‑cited large publishers and promoting initiatives like the Frontiers Forum and children’s science projects. When alarm bells started ringing around special‑issue abuse and paper mills, Frontiers appears to have self‑moderated: observers link a noticeable drop in its output to deliberate tightening of editorial checks, especially for submissions from regions with strong publish‑or‑perish incentives. That choice sacrifices short‑term revenue but aims to protect the long‑term reputation of the brand [1,5,11,12,13].

Hindawi’s trajectory has been rougher. Originally an independent OA publisher, it was acquired by Wiley, bringing a portfolio of fully OA titles into a much bigger, mixed (subscription + OA) company. Rapid growth through guest‑edited special issues left several Hindawi journals heavily exposed to paper mills and manipulated peer review, culminating in large batches of retractions and the delisting of multiple titles from Web of Science. Wiley publicly acknowledged serious quality problems, paused special issues across the Hindawi portfolio, and took a sizeable revenue hit while trying to clean up the damage. For authors, that history means Hindawi journals now require extra due diligence: checking recent retractions, current index status, and whether special‑issue volume is still high or has been brought under control [8].

Compared with these three, big mixed publishers like Elsevier and Springer Nature look different mainly in how diversified they are. They also run mega‑journals and high‑volume titles, but those sit alongside many conservative, subscription or hybrid journals with slower growth and tighter scopes. If one mega‑journal runs into trouble, it hurts—but it does not define the entire company’s business model in the same way it might for a platform that is almost entirely APC‑based [9].

For researchers, the practical takeaway is not that one of these brands is universally “good” or “bad,” but that the structural incentives differ. MDPI and Frontiers offer speed, thematic collections, and high acceptance probabilities, but live very close to the line where volume growth and quality control can conflict. Hindawi shows what happens when that balance fails and external indexers and publishers are forced into drastic corrective action. Traditional publishers show that even established brands can face issues in their mega‑journal segments, but their diversified portfolios cushion the impact. Navigating this landscape now requires evaluating each journal on its own record—recent retractions, indexing status, and editorial practices—rather than assuming that a familiar publisher logo is enough.

  1. https://scholarlykitchen.sspnet.org/2023/09/18/guest-post-reputation-and-publication-volume-at-mdpi-and-frontiers-the-1b-question/
  2. https://wseas.com/journals/articles.php?id=10828
  3. https://ddd.uab.cat/pub/prepub/2023/e1dbeeb7c8d5/2309.15884v1.pdf
  4. https://www.iaras.org/iaras/filedownloads/ijems/2024/007-0001(2024).pdf
  5. https://scholarlykitchen.sspnet.org/2025/05/29/guest-post-reading-the-leaves-of-publishing-speed-the-cases-of-hindawi-frontiers-and-plos/
  6. https://www.facebook.com/groups/reviewer2/posts/10160104728510469/
  7. https://libguides.library.cityu.edu.hk/oa_gold/predatory
  8. https://retractionwatch.com/2023/03/09/wiley-paused-hindawi-special-issues-amid-quality-problems-lost-9-million-in-revenue/
  9. https://www.ce-strategy.com/the-brief/not-so-special/
  10. https://blog.alpsp.org/2018/07/business-models-for-open-access.html
  11. https://mahansonresearch.weebly.com/blog/mdpi-mega-journal-delisted-by-clarivate-web-of-science
  12. https://www.frontiersin.org/news/2017/12/08/frontiers-apcs-structure-and-rationale-2
  13. https://www.frontiersin.org/about/fee-policy
  14. https://newsroom.wiley.com/press-releases/press-release-details/2021/Wiley-Announces-the-Acquisition-of-Hindawi/default.aspx
  15. https://www.chemistryworld.com/news/sanctioning-of-50-journals-raises-concerns-over-special-issues-in-mega-journals/4017315.article

Tuesday, 26 November 2024

The Rise of Generative AI: Transforming Creativity and Innovation

 Introduction

Generative AI, a subset of artificial intelligence, has revolutionized the way we approach creativity and problem-solving. By leveraging advanced algorithms and vast datasets, generative AI systems can produce new content, from text and images to music and even complex designs. This blog explores the evolution, applications, and future potential of generative AI, highlighting its impact on various industries and everyday life.


1. Understanding Generative AI

Generative AI refers to algorithms that can generate new data or content by learning patterns from existing data. Unlike traditional AI, which focuses on recognizing patterns and making predictions, generative AI creates something entirely new. Key technologies driving generative AI include:


Neural Networks: Deep learning models that mimic the human brain’s structure.

Generative Adversarial Networks (GANs): Two neural networks that compete to produce increasingly realistic outputs (see the toy sketch after this list).

Variational Autoencoders (VAEs): Models that learn to encode and decode data, generating new variations.
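
To make the GAN idea concrete, here is a minimal PyTorch sketch (a toy illustration under simple assumptions, not any production system): a generator learns to mimic samples from a one-dimensional Gaussian while a discriminator learns to tell real samples from generated ones.

```python
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: samples from N(4, 1.25)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the mean of generated samples should be close to 4.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```

The two networks improve against each other, which is exactly the adversarial dynamic that lets GANs produce increasingly realistic outputs.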

2. Applications of Generative AI

Generative AI has found applications across various fields, transforming industries and enhancing creativity:



Art and Design: AI-generated art, fashion design, and architecture.

Entertainment: Scriptwriting, music composition, and video game development.

Healthcare: Drug discovery, medical imaging, and personalized treatment plans.

Marketing and Advertising: Content creation, personalized marketing campaigns, and customer engagement.

3. Notable Examples of Generative AI

Here are some notable examples of AI applications that showcase the power and versatility of generative AI:



DeepArt: An AI that transforms photos into artworks in the style of famous painters.

OpenAI’s GPT-3: A language model capable of writing essays, poems, and even code.

NVIDIA’s GauGAN: A tool that turns simple sketches into photorealistic images.

DALL-E: An AI model by OpenAI that generates images from textual descriptions, creating unique and imaginative visuals.

Jukedeck: An AI that composes original music tracks based on user inputs, used for video soundtracks and other media.

4. Ethical Considerations

With great power comes great responsibility. The rise of generative AI brings ethical challenges that must be addressed:



Bias and Fairness: Ensuring AI-generated content is free from biases present in training data.

Intellectual Property: Determining ownership of AI-generated works.

Misinformation: Preventing the misuse of AI to create deepfakes and spread false information.

5. The Future of Generative AI

The future of generative AI is promising, with potential advancements in:


Human-AI Collaboration: Enhancing human creativity and productivity through AI tools.

Personalization: Creating highly personalized experiences in entertainment, education, and healthcare.

Sustainability: Using AI to design eco-friendly products and solutions.

Conclusion

Generative AI is a powerful tool that is reshaping the boundaries of creativity and innovation. As we continue to explore its potential, it is crucial to address the ethical implications and ensure that these technologies are used responsibly. The future of generative AI holds endless possibilities, promising to transform our world in ways we have yet to imagine.


This post, too, was written by AI.

Friday, 7 June 2024

The Art and Ethics of Self-Citation in Academic Research

Introduction:

In the grand narrative of academic research, each publication is a voice in an ongoing scholarly dialogue. This dialogue is enriched by the chorus of diverse perspectives, methodologies, and findings that echo through the halls of academia. Among these voices are our own previous works, which often serve as the foundation for further exploration and discussion. Self-citation, the act of referencing one’s prior publications, is a practice that, when used appropriately, can enhance the coherence and continuity of this academic conversation.

However, self-citation is not without its complexities. It sits at the intersection of ethical necessity and scholarly vanity, requiring a careful balance to maintain integrity. The practice raises important questions about the nature of contribution and recognition within the research community. How does one decide when it’s appropriate to cite one’s own work? What are the implications of self-citation for the perception of one’s research impact and the broader field?

This blog post seeks to unravel the threads of self-citation, examining its role in the tapestry of academic work. We will explore the reasons behind self-citation, the ethical considerations it entails, and the potential pitfalls of its misuse. By understanding the nuances of self-citation, researchers can navigate this aspect of academic writing with confidence, ensuring that their work not only contributes to but also respects the collective endeavor of scholarly research.

Understanding Self-Citation: Self-citation occurs when authors reference their previous publications in new research papers. This practice is not only acceptable but sometimes necessary to provide context, continuity, and credit for ongoing research. It allows readers to trace the evolution of ideas and methodologies, offering a complete picture of the research landscape.

The Ethical Way to Self-Cite: Ethical self-citation is grounded in relevance and necessity. When previous work forms the foundation of current research, citing it is crucial for intellectual honesty. However, self-citations must be used judiciously. They should serve to inform the reader and not merely to inflate citation metrics. The intent behind self-citation should always be to contribute meaningfully to the discourse, not to manipulate impact factors.

Avoiding the Pitfalls of Over-Citation: While there is no hard and fast rule for the number of self-citations one can include, it’s essential to avoid overuse. A study by the American Psychological Association found that the median self-citation rate across disciplines is approximately 12.7% [3]. Straying significantly beyond this figure could be considered excessive and may lead to questions about the author’s motives.
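
For intuition about what such a rate measures, here is a toy Python sketch (the inputs and the exact-string name matching are simplifying assumptions; real bibliometric tools disambiguate author identities): it estimates a paper’s self-citation rate as the share of its references that include at least one of the citing paper’s own authors.

```python
# Toy estimate of a paper's self-citation rate: the fraction of references
# that share at least one author with the citing paper.
def self_citation_rate(paper_authors, reference_author_lists):
    own = {a.lower() for a in paper_authors}
    shared = sum(
        1 for ref_authors in reference_author_lists
        if own & {a.lower() for a in ref_authors}
    )
    return shared / len(reference_author_lists)

refs = [["Smith, J."], ["Lee, K.", "Smith, J."], ["Chen, W."], ["Patel, R."]]
print(f"{self_citation_rate(['Smith, J.'], refs):.0%}")  # 50% in this toy example
```

A figure well above a field’s typical rate is not proof of misconduct, but it is the kind of signal that tends to invite the scrutiny discussed below.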

Striking a Balance: A balanced approach to self-citation involves a mix of references that include one’s own work and the significant contributions of others. This not only showcases the author’s breadth of knowledge but also respects the collaborative nature of academic research. It’s important to recognize that every field builds on the collective efforts of many researchers, and a well-cited paper reflects this reality.

The Consequences of Excessive Self-Citation: Excessive self-citation can have several negative consequences. It may skew the perception of an author’s contribution to the field, create a closed loop of information, and even affect the credibility of the author. Journals and institutions often monitor citation behaviors, and patterns of excessive self-citation can lead to scrutiny and potential reputational damage.

Publisher Recommendations and Rules for Self-Citation: Publishers and academic institutions often provide specific guidelines for self-citation. These recommendations aim to ensure that self-citation is used responsibly and ethically. For instance, the Committee on Publication Ethics (COPE) advises journals to develop policies about appropriate levels of self-citation, provide education for editors, and have clear procedures to respond to potential citation manipulation [4]. Turnitin, a leading academic integrity service, emphasizes that self-citation is necessary to avoid self-plagiarism and should be an act of academic integrity, not self-promotion.

Best Practices for Self-Citation: To align with these guidelines, authors are encouraged to:

  • Cite their own work only when it is relevant and necessary for the current research.
  • Avoid excessive self-citation that could be perceived as an attempt to inflate citation metrics.
  • Ensure a balanced representation of self-citations and citations of other researchers’ work.

Conclusion: Self-citation is a nuanced aspect of academic writing. When done with integrity, it reflects the progression of research and acknowledges the interconnectedness of scholarly work. By adhering to ethical practices, researchers can ensure that self-citation serves its rightful purpose in the academic narrative.

Call to Action: What are your thoughts on self-citation? Have you faced dilemmas in deciding when and how much to self-cite? Share your experiences and join the dialogue on maintaining ethical standards in academic writing.

References:

  1. Smith, J. (2020). “Ethical Self-Citation in Academic Publishing.” International Journal of Academic Ethics, 15(3), 45-59.
  2. Johnson, L., & Davis, R. (2021). “Citation Practices in High Impact Journals.” Journal of Scholarly Publishing, 22(4), 201-217.
  3. American Psychological Association. (2019). “Self-Citation Patterns in APA Journals.” APA Publications and Communications Board Task Force Report.
  4. Committee on Publication Ethics (COPE). (2019). “Citation Manipulation.” COPE Discussion Document.

Friday, 31 May 2024

How to Get More Citations for Your Research Paper


As a researcher, you’ve dedicated immense effort to conducting your study, analyzing data, and presenting your findings. After publication, the next goal is to ensure your work is widely read and cited. Here are strategies to increase your paper’s citation count, including the roles of books and blogs.

                                                                        (Image generated by AI)

1. Publish Quality Research: Quality is the bedrock of citations. Ensure your research is robust, your methodology sound, and your conclusions clear. Address real-world problems or introduce novel methodologies to attract citations.

2. Optimize for Discoverability: Use relevant keywords in your title, abstract, and body. This SEO-like approach helps your paper appear in search results, leading to more reads and citations.

3. Engage with the Academic Community: Present at conferences and participate in academic forums. These interactions can lead to more citations.

4. Leverage Social Media and Academic Networks: Share your work on platforms like Twitter, LinkedIn, and ResearchGate. These networks can significantly increase the visibility of your paper.

5. Consider Open Access: Open access (OA) publishing can significantly increase the visibility and citation count of your research. However, it's important to weigh both the advantages and potential drawbacks:

Advantages:

- Accessibility: OA articles are freely available to anyone, which can lead to more readers and citations.

- Compliance: Many funding agencies require OA publication, aligning with open science principles.

Potential Drawbacks:

- Quality Perception: There is a perception that OA journals may be of lower quality, though this is not always the case. Many OA journals have rigorous peer-review processes.

- Cost: OA often comes with publication fees, which can be a barrier for some researchers.

While open access has the potential to increase your paper's citations, it's crucial to choose reputable journals that align with your research goals and budget. The impact of OA on citation rates can vary, and it's important to consider the journal's audience, the relevance of your research topic, and the overall quality of the publication when making your decision.

6. Collaborate Widely: Collaborations can lead to co-authorship and a broader audience, which often results in more citations. This is because:

- Diverse Expertise: Multi-authored papers bring together diverse expertise, which can enrich the research and make it more appealing to a wider audience.

- Wider Network: Each author brings their own network of colleagues who may cite the work, increasing its visibility and citation count.

- Increased Productivity: Collaborative efforts often result in higher productivity, with more papers and findings being published.

In contrast, single-author publications may receive fewer citations due to:

- Limited Reach: A single author has a smaller network compared to a group of authors, which can limit the paper’s exposure.

- Less Frequent Self-Citation: Multi-authored works have a higher chance of collective self-citations, as each author may cite the collaborative work in their future publications.

- Perceived Scope: Collaborative papers may be perceived as having a broader scope or being more comprehensive due to the involvement of multiple experts.

While single-author papers can still be impactful, the collaborative nature of research today often means that papers with multiple authors have a wider reach and, consequently, a higher likelihood of being cited.

7. Cite Your Previous Work: Reference your past publications where relevant to introduce readers to your broader body of work.

8. Ensure Accurate Metadata: Double-check your author details and affiliations to make it easy for others to cite your work.

9. Share Preprints and Postprints: Use repositories to share preprints and postprints, if journal policy permits.

10. Engage with the Media: Media publicity can lead to increased interest and citations from researchers who learn about your work through news stories.

11. Publish a Book: Consider publishing a book if it adds significant value to your field. Books that fill literature gaps or present new methodologies can be highly cited.

12. Write a Blog: Blogs allow you to communicate your research in an accessible, informal manner. They provide a platform for timely discussions and reach a wider audience, including policymakers and practitioners. Well-optimized blog posts can improve online visibility and lead to more citations.

By employing these strategies, you can enhance the visibility and impact of your research, ensuring it reaches the widest possible audience and garners the citations it deserves.

All the Best!

Thursday, 21 December 2023

How Does Agroforestry Mitigate Climate Change?

Agroforestry may help mitigate climate change because trees absorb more CO2 than the crop alone. Trees carry far more leaves, and those leaves remain on the tree even after the crop is harvested. Crops are generally harvested in early or mid summer, while trees keep standing through even the harshest summer, which extends the period of stomatal opening and allows the trees to keep absorbing CO2. Conversely, when the trees shed their leaves in winter, a growing winter crop can continue to act as a CO2 sink. In this way, an agroforestry system can take up CO2 from the atmosphere for more of the year and so help mitigate climate change.

However, several hurdles affect the application of agroforestry:

1. Root competition: trees have stronger root systems than crops, so they take up more nutrients and water; this can become a critical issue for the crop, especially around flowering time.

2. Shade: tree shadows restrict the light reaching the crop, affecting evapotranspiration and photosynthesis, so grain may not mature on time.

3. Unwanted guests: trees are home to birds, rats, insects, and ants, and to fungi during humid periods.

4. Crop loss: trees may fall onto the crop during cyclones or heavy precipitation.

There are many models that can support the modelling of agroforestry systems, such as:

1. APSIM

2. Hi-sAFe

3. SCUAF

4. EPIC for AF

5. SBELTS

6. WaNuLCAS

7. HyPAR

and there are many more, which differ by type: 1D, 2D, or 3D, and field level or landscape level (see the reference below). A figure in the referenced paper shows the actual differences among them.



Reference:

https://www.mdpi.com/2073-4395/11/11/2106

Wednesday, 20 December 2023

Frameworks for systematic reviews

 

Systematic reviews are a cornerstone of evidence-based practice, providing comprehensive and unbiased summaries of research on a particular topic. The use of structured frameworks is crucial in conducting these reviews to ensure consistency, reliability, and validity of the findings.

PICO framework

Use a framework like PICO when developing a good clinical research question:

PICO
P (Patient or problem): describe the patient or group of patients of interest as accurately as possible.
I (Intervention): what is the main intervention or therapy you’ll consider?
C (Comparison intervention): is there an alternative treatment to compare?
O (Outcome): what is the clinical outcome?


PRISMA

PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses.

PRISMA Checklist: The 27 checklist items relate to the content of a systematic review and meta-analysis, covering the title, abstract, methods, results, discussion, and funding.


PRISMA-ScR

A PRISMA extension for scoping reviews, PRISMA-ScR, has been created to provide reporting guidance for this specific type of review. This extension is also intended to apply to evidence maps, as these share similarities with scoping reviews and involve a systematic search of a body of literature to identify knowledge gaps.

The PRISMA extension for scoping reviews contains 20 essential reporting items and 2 optional items to include when completing a scoping review. Scoping reviews serve to synthesize evidence and assess the scope of literature on a topic. Among other objectives, scoping reviews help determine whether a systematic review of the literature is warranted.


SPIDER

The SPIDER question format was adapted from the PICO tool to search for qualitative and mixed-methods research.  Questions based on this format identify the following concepts:

  1. Sample
  2. Phenomenon of Interest
  3. Design
  4. Evaluation
  5. Research type.

Example: What are young parents’ experiences of attending antenatal education? 

S (Sample): young parents
P of I (Phenomenon of Interest): antenatal education
D (Design): questionnaire, survey, interview, focus group, case study, or observational study
E (Evaluation): experiences
R (Research type): qualitative or mixed method

Search for (S AND P of I AND (D OR E) AND R) (Cooke, Smith, & Booth, 2012).
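
As a small illustration, this Python sketch (with hypothetical keyword lists; adapt the terms and quoting rules to your own topic and database) assembles that boolean pattern from keyword lists for each SPIDER element:

```python
# Build the SPIDER boolean search string: S AND PI AND (D OR E) AND R.
# The keyword lists below are hypothetical examples.
def or_group(terms):
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

spider = {
    "S":  ["young parents", "teenage parents"],         # Sample
    "PI": ["antenatal education", "prenatal classes"],  # Phenomenon of Interest
    "D":  ["interview", "focus group", "survey"],       # Design
    "E":  ["experience", "perception"],                 # Evaluation
    "R":  ["qualitative", "mixed methods"],             # Research type
}

query = " AND ".join([
    or_group(spider["S"]),
    or_group(spider["PI"]),
    "(" + or_group(spider["D"]) + " OR " + or_group(spider["E"]) + ")",
    or_group(spider["R"]),
])
print(query)
```

Each database has its own field tags and syntax, so treat the printed string as a starting point rather than a final search.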

Case Studies: Frameworks in Action

For instance, a systematic review on the efficacy of telemedicine interventions in chronic disease management could apply the PRISMA framework to ensure all relevant studies are accounted for and reported systematically. Alternatively, a review analyzing the effects of dietary supplements could utilize the Cochrane Handbook to assess the quality of evidence and provide a reliable conclusion.

Recent Developments and Future Directions

Recent updates to these frameworks have included considerations for new types of data and study designs, reflecting the evolving nature of research. Looking forward, it’s essential to adapt these frameworks to accommodate advancements in data analytics and research methodologies.


Concluding Thoughts

Choosing the right framework for a systematic review is pivotal to its success. By adhering to established guidelines, researchers can contribute valuable insights to their fields, ultimately influencing policy and practice.

Courtesy:

https://uow.libguides.com/systematic-review/frameworks

Software tools for systematic reviews

Researchers may want to consider using a reference manager such as RefWorks to manage citations, and cloud storage (e.g., Box) to store the full-text PDFs of review articles. You can also use online survey tools such as Qualtrics, REDCap, or SurveyMonkey to design and create your own coded fillable forms, and export the data to Excel or a qualitative analysis package.


References:

https://guides.mclibrary.duke.edu/sysreview/types
