
An Article Summarizer Based on ChatGPT and Google Scholar


In today's era of information overload, researchers face an enormous volume of literature every day. To help screen and digest this material more efficiently, we built an article summarization tool based on ChatGPT and Google Scholar. It automatically retrieves research articles from Google Scholar, generates summaries with OpenAI's GPT models, and supports multilingual output, helping to break down language barriers and speed up research.

Project Overview

The project aims to improve research efficiency in two ways:

  1. Cross-language reading: multilingual summaries help researchers whose first language is not English understand English-language literature.
  2. Efficiency: automation reduces the time spent manually screening literature and writing summaries.

Feature Highlights

  • Retrieves research articles from Google Scholar 📚
  • Generates summaries with OpenAI's GPT models 🤖
  • Lets users customize the summary content 🛠️
  • Saves the generated summaries and article links to CSV and text files 💾
  • Supports summary output in multiple languages 🌐
  • There is also a Web App version; whether it will eventually go live is uncertain, but it gives a preview of what the program can do.

1. Prerequisites

Before you start, make sure your system has the following:

  • Python 3.7 or later 🐍
  • An OpenAI API Key 🔑

2. Install Dependencies

pip install openai==0.28
pip install scholarly
pip install python-dotenv

3. Project Structure

src                       # project code, split into modules
├── main.py               # main file (split into separate modules on GitHub; combined into one here)
├── env.dotenv            # environment file; edit it to set the ChatGPT version and your API key
└── requirements.txt      # required packages; run pip install -r requirements.txt to install them
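
For reference, a minimal requirements.txt just lists the three packages installed above (the file in the GitHub repo may pin different versions):

openai==0.28
scholarly
python-dotenv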

4. Core Code

The function get_filtered_articles(topic, start_year, end_year, max_results): topic is the research area, start_year and end_year are the start and end years used for filtering, and max_results is the maximum number of articles to return, which you can adjust as needed. It uses scholarly to scrape articles from Google Scholar. Note that you should not call it too frequently, or Google Scholar may temporarily block the requests, and you typically have to wait around 8 hours before it works again.

from scholarly import scholarly

def get_filtered_articles(topic, start_year, end_year, max_results):
   """
   从 Google Scholar 获取与某个主题相关的文章并按年份进行过滤。
   变量 max_results:要获取的最大文章数。
   变量 topic:要搜索的研究主题。
   变量 start_year:过滤文章的起始年份。
   变量 end_year:过滤文章的结束年份。
   """
    search_query = scholarly.search_pubs(topic)
    articles = []

    for article in search_query:
        # Look up the article's publication year
        pub_year_str = article['bib'].get('pub_year', '0')
        pub_year = int(pub_year_str) if pub_year_str.isdigit() else 0

        # Show progress
        print(f"Progress: {len(articles)}/{max_results}")
        print(f"Found article: {article['bib']['title']} ({pub_year})")

        # Stop once the desired number of articles has been collected
        if len(articles) >= max_results:
            break

        # Keep only articles within the target year range
        if start_year <= pub_year <= end_year:
            abstract = article['bib'].get('abstract', 'No abstract available')
            articles.append({
                'title': article['bib']['title'],
                'abstract': abstract,
                'year': pub_year,
                'url': article.get('pub_url', 'No URL available')
            })

    return articles
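
Given the rate-limit warning above, it is worth spacing out successive searches. Below is a minimal sketch of my own (not part of the original repo) that adds an arbitrary cool-down between two illustrative queries:

import time

topics = ["computer assisted rehabilitation environment", "virtual reality gait training"]
all_articles = []
for t in topics:
    all_articles.extend(get_filtered_articles(t, 2019, 2024, 5))
    time.sleep(60)  # arbitrary cool-down; Google Scholar publishes no safe request rate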

The function analyze_with_chatgpt(abstract, attributes=None, language='English'): attributes lets you customize the table layout, for example [Year, Research Field, Title, Clinical Application, Methodology, Outcomes], and language is the language of the summary. The prompt below works well, but you can customize it as well.

Example env.dotenv file: create an empty file named env.dotenv and fill it in as follows.

# OpenAI API Key
OPENAI_API_KEY=your-open-ai-api-key-here

# GPT Model Version
GPT_VERSION=gpt-3.5-turbo

The Python module that loads this file and calls the OpenAI API:

import openai
import os
from dotenv import load_dotenv  # Import dotenv to load environment variables

# Load environment variables from env.dotenv (adjust the path to wherever you placed the file)
load_dotenv(dotenv_path='../env.dotenv')

# Load the OpenAI API key from the environment
openai.api_key = os.getenv('OPENAI_API_KEY')

def analyze_with_chatgpt(abstract, attributes=None, language='English'):
   """
   使用 OpenAI 的 GPT 模型分析摘要并返回汇总表格式。

   :param abstract:研究论文的摘要。
   :param attribute:表格格式的属性列表。默认值为 ["年份", "研究领域", "标题", "临床应用", "方 
   法论", "结果"]。
   :param language:汇总表的语言。默认值为“英语”。
   :return:字符串形式的汇总表格式。
   """
    # Set default attributes if none were provided
    if attributes is None:
        attributes = ["Year", "Research Field", "Title", "Clinical Application", "Methodology", "Outcomes"]

    # Join the attributes into a single string for easy customization
    attributes_str = ", ".join(attributes)

    # Prepare the prompt for ChatGPT
    prompt = (
        f"Please summarize the following research abstract in table format with attributes "
        f"[{attributes_str}]. Keep the answer brief and in the same format every time. "
        f"Respond in {language}:\n{abstract}"
    )

    # Make the API call
    response = openai.ChatCompletion.create(
        model=os.getenv('GPT_VERSION', 'gpt-3.5-turbo'),  # matches the GPT_VERSION key in env.dotenv
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ]
    )
    return response.choices[0].message['content']
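
A quick usage example (the abstract below is invented purely for illustration, and the call requires a valid OPENAI_API_KEY in your env.dotenv):

example_abstract = (
    "This pilot study evaluated a computer assisted rehabilitation environment "
    "for gait training in 20 post-stroke patients over eight weeks."
)
print(analyze_with_chatgpt(
    example_abstract,
    attributes=["Year", "Title", "Methodology", "Outcomes"],
    language="Chinese",
))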

The following functions save the results to CSV and text files. (Note: the tables ChatGPT generates sometimes contain Markdown separator lines that need to be parsed out; because they appear only some of the time, a few rows in the output may end up misaligned, but they are a small minority.)

import csv

def save2Csv(table_data_list, csv_file="example_output.csv"):
    """
    Save a list of table strings to a CSV file.

    Variable table_data_list: list of table strings formatted like Markdown tables.
    Variable csv_file: name of the output CSV file (defaults to "example_output.csv").
    """
    with open(csv_file, mode='w', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)

        # Loop over every table string in the list
        for table_data in table_data_list:
            # Parse the table into rows of data
            lines = table_data.strip().split("\n")

            # Extract the header row and the data rows
            headers = [header.strip() for header in lines[0].strip("|").split("|")]
            rows = []
            for line in lines[2:]:  # Skip the header and separator lines
                row = [cell.strip() for cell in line.strip("|").split("|")]
                rows.append(row)

            # Write the header row only for the first table
            if file.tell() == 0:
                writer.writerow(headers)

            # Write the data rows
            writer.writerows(rows)

    print(f"Data successfully saved to {csv_file}")

def save2Textt(table_data_list, text_file="example_output.txt"):
   """
   将代表表行的字典列表保存到文本文件。

   变量 table_data_list:字典列表,其中每个字典代表表中的一行。
   变量 text_file:输出文本文件的名称(默认为“example_output.txt”)。
   """
    with open(text_file, mode='w', encoding='utf-8') as file:
        # Write headers based on the first dictionary's keys
        headers = list(table_data_list[0].keys())
        header_line = "| " + " | ".join(headers) + " |"
        separator_line = "| " + " | ".join(["-" * len(header) for header in headers]) + " |"

        # Write headers and separator to the text file
        file.write(header_line + "\n")
        file.write(separator_line + "\n")

        # Loop through each dictionary in the list and write its values as a table row
        for row_dict in table_data_list:
            row_line = "| " + " | ".join(str(row_dict[key]) for key in headers) + " |"
            file.write(row_line + "\n")

        file.write("\n")

    print(f"Data successfully saved to {text_file}")

Now for the main function that runs the whole pipeline 🙌

from scholarly_search import get_filtered_articles
from gpt_analysis import analyze_with_chatgpt
from data_saver import save2Csv
from data_saver import save2Textt


def main():
    results = []
    # User-defined attributes and summary language
    user_defined_attributes = ["Year", "Research Field", "Title", "Clinical Application", "Methodology", "Outcomes"]
    summary_language = "English"  # You can change to other languages like "Spanish", "French", "Chinese", etc.

    # The basic parameters:
    #   topic       - the research topic to search for
    #   start_year  - earliest publication year to keep
    #   end_year    - latest publication year to keep
    #   max_results - maximum number of articles to fetch
    max_results = 10  # Adjust based on needs
    topic = "computer assisted rehabilitation environment"
    start_year = 2019
    end_year = 2024

    articles = get_filtered_articles(topic, start_year, end_year, max_results)

    if not articles:
        print("No articles found in the specified date range.")
        return

    for article in articles:
        print(f"Title: {article['title']}")
        print(f"Year: {article['year']}")
        print(f"Abstract: {article['abstract']}")
        parsed = analyze_with_chatgpt(article['abstract'], user_defined_attributes, summary_language)
        print(parsed)
        results.append({
            'summary': parsed,
            'url': article['url']
        })
        print(f"URL: {article['url']}")
        print("\n" + "=" * 80 + "\n")

    save2Csv([result['summary'] for result in results], "research_articles.csv")
    save2Textt(results, "research_articles.txt")

if __name__ == "__main__":
    main()
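
With everything combined into one file as above, running the pipeline is simply:

python main.py

Just make sure the dotenv_path passed to load_dotenv matches where you actually placed env.dotenv, otherwise the API key will not be found.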

5. Turning It into a Web App

This is just a simple application I built with Flask; if you need it, head over to my GitHub page. Here I will only show the HTML and the idea behind it, since pasting everything would be far too long. The point of the HTML is to let people without a programming background use this functionality, essentially by exposing the variables mentioned above through a form. The CSS is not included below; write your own if you need it.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Research Summary Application</title>
</head>
<body>
    <div class="container">
        <h1 id="app-title">Research Summary Application</h1>
        <div class="language-selector">
            <button id="btn-en">English</button>
            <button id="btn-zh">中文</button>
        </div>
        <form method="POST">
            <label for="topic" id="topic-label">Research Topic: (Please input in English)</label>
            <input type="text" id="topic" name="topic" required placeholder="Enter the research topic">

            <label for="start_year" id="start-year-label">Start Year:</label>
            <input type="text" id="start_year" name="start_year" class="year-picker" placeholder="YYYY" required>

            <label for="end_year" id="end-year-label">End Year:</label>
            <input type="text" id="end_year" name="end_year" class="year-picker" placeholder="YYYY" required>

            <label for="max_results" id="max-results-label">Max Results:</label>
            <input type="number" id="max_results" name="max_results" min="1" max="150" required placeholder="Enter max results">

            <label for="language" id="language-select-label">Summery language:</label>
            <select id="language" name="language">
                <option value="English">English</option>
                <option value="Spanish">Spanish</option>
                <option value="French">French</option>
                <option value="German">German</option>
                <option value="Chinese">Chinese</option>
                <option value="Japanese">Japanese</option>
                <option value="Korean">Korean</option>
                <option value="Portuguese">Portuguese</option>
            </select>

            <label for="attributes" id="attributes-label">Attributes (comma-separated):</label>
            <input type="text" id="attributes" name="attributes" placeholder='Enter attributes (e.g., "Year, Research Field, Title")' value="Year, Research Field, Title, Clinical Application, Methodology, Outcomes">

            <input type="submit" value="Run">
        </form>

        <p id="api-warning">Due to API charge issue, the max result is 150</p>
        <p id="note">Note: Frequent requests may result in temporary bans by the Google Scholar API. Please use responsibly.</p>

        <div class="developer">
            <p>Developed by Songxiang Tang. University of Melbourne, CAREN lab.</p>
        </div>
    </div>
    <script>
        // Language toggle
        const btnEn = document.getElementById('btn-en');
        const btnZh = document.getElementById('btn-zh');

        const texts = {
            en: {
                title: "Research Summary Application",
                topicLabel: "Research Topic:",
                startYearLabel: "Start Year:",
                endYearLabel: "End Year:",
                maxResultsLabel: "Max Results:",
                languageSelectLabel: "Language:",
                attributesLabel: "Attributes (comma-separated):",
                note: "Note: Frequent requests may result in temporary bans by the Google Scholar API. Please use responsibly.",
                apiWarning: "Due to API charge issue, the max result is 150",
            },
            zh: {
                title: "研究论文总结应用",
                topicLabel: "研究主题:",
                startYearLabel: "开始年份:",
                endYearLabel: "结束年份:",
                maxResultsLabel: "最大结果:",
                languageSelectLabel: "语言:",
                attributesLabel: "属性(用逗号分隔):",
                note: "注意:频繁请求可能会导致 Google Scholar API 的临时封禁。请负责任地使用。",
                apiWarning: "由于 API 收费问题,最大结果为 150",
            }
        };

        btnEn.addEventListener('click', () => {
            document.getElementById('app-title').innerText = texts.en.title;
            document.getElementById('topic-label').innerText = texts.en.topicLabel;
            document.getElementById('start-year-label').innerText = texts.en.startYearLabel;
            document.getElementById('end-year-label').innerText = texts.en.endYearLabel;
            document.getElementById('max-results-label').innerText = texts.en.maxResultsLabel;
            document.getElementById('language-select-label').innerText = texts.en.languageSelectLabel;
            document.getElementById('attributes-label').innerText = texts.en.attributesLabel;
            document.getElementById('note').innerText = texts.en.note;
            document.getElementById('api-warning').innerText = texts.en.apiWarning;
        });

        btnZh.addEventListener('click', () => {
            document.getElementById('app-title').innerText = texts.zh.title;
            document.getElementById('topic-label').innerText = texts.zh.topicLabel;
            document.getElementById('start-year-label').innerText = texts.zh.startYearLabel;
            document.getElementById('end-year-label').innerText = texts.zh.endYearLabel;
            document.getElementById('max-results-label').innerText = texts.zh.maxResultsLabel;
            document.getElementById('language-select-label').innerText = texts.zh.languageSelectLabel;
            document.getElementById('attributes-label').innerText = texts.zh.attributesLabel;
            document.getElementById('note').innerText = texts.zh.note;
            document.getElementById('api-warning').innerText = texts.zh.apiWarning;
        });

        document.querySelectorAll('.year-picker').forEach(input => {
            input.addEventListener('input', function() {
                this.value = this.value.replace(/[^0-9]/g, '').slice(0, 4);
            });
        });
    </script>

<script>
    // The "how to use" toggle relies on #howToUseBtn and #instructions, which exist in the
    // full page on GitHub but are omitted from this trimmed snippet, hence the null check.
    const howToUseBtn = document.getElementById('howToUseBtn');
    if (howToUseBtn) {
        howToUseBtn.addEventListener('click', function() {
            const instructions = document.getElementById('instructions');
            instructions.style.display = (instructions.style.display === 'none') ? 'block' : 'none';
        });
    }
</script>
</body>
</html>
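
For completeness, a minimal Flask route that could sit behind this form might look like the sketch below. It only illustrates how the form fields map onto the functions above; the real app on GitHub is organized differently, and the template name and result rendering here are assumptions of mine:

from flask import Flask, render_template, request

from scholarly_search import get_filtered_articles
from gpt_analysis import analyze_with_chatgpt

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    results = []
    if request.method == "POST":
        topic = request.form["topic"]
        start_year = int(request.form["start_year"])
        end_year = int(request.form["end_year"])
        max_results = min(int(request.form["max_results"]), 150)  # cap mirrors the note on the page
        language = request.form.get("language", "English")
        attributes = [a.strip() for a in request.form.get("attributes", "").split(",") if a.strip()]

        articles = get_filtered_articles(topic, start_year, end_year, max_results)
        for article in articles:
            summary = analyze_with_chatgpt(article['abstract'], attributes or None, language)
            results.append({'summary': summary, 'url': article['url']})

    # Assumes an index.html template that renders the form shown above plus the results
    return render_template("index.html", results=results)

if __name__ == "__main__":
    app.run(debug=True)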

References

GitHub - SongxiangT/Research-Summarizer-: ChatGPT research paper auto-summarizer (automatic article retrieval and summarization in table form)


Hope this blog post helps with your project!
