
    Towards expert systems for improved customer services using ChatGPT as an inference engine.

    By harnessing both implicit and explicit customer data, companies can develop a more comprehensive understanding of their consumers, leading to better customer engagement and experience, and improved loyalty. As a result, businesses have embraced many AI technologies, including chatbots, sentiment analysis, voice assistants, predictive analytics, and natural language processing, within customer services and e-commerce. The arrival of ChatGPT, a state-of-the-art deep learning model trained with general knowledge in mind, has brought about a paradigm shift in how companies approach AI applications. However, given that most business problems are bespoke and require specialised domain expertise, ChatGPT needs to be aligned with the requisite task-oriented ability to solve these issues. This paper presents an iterative procedure that incorporates expert system development process models and prompt engineering into the design of the descriptive knowledge and few-shot prompts needed for ChatGPT-powered expert system applications within customer services. Furthermore, this paper explores potential application areas for ChatGPT-powered expert systems in customer services, presenting opportunities for their effective utilisation in the business sector.
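
    To make the procedure concrete, below is a minimal sketch of how descriptive domain knowledge and few-shot examples might be assembled into a chat-style prompt. The helper name, policy text and example exchanges are hypothetical illustrations, not the paper's actual prompts.

    # Minimal sketch: assembling descriptive domain knowledge and few-shot
    # examples into a chat-style message list for a ChatGPT-like model.
    # DOMAIN_KNOWLEDGE, FEW_SHOT_EXAMPLES and build_expert_prompt are
    # hypothetical illustrations, not taken from the paper.

    DOMAIN_KNOWLEDGE = (
        "You are a customer-service expert for an online retailer. "
        "Refund policy: items may be returned within 30 days with a receipt."
    )

    FEW_SHOT_EXAMPLES = [
        ("Can I return an opened item?",
         "Yes, opened items can be returned within 30 days with a receipt."),
        ("My order arrived damaged.",
         "I'm sorry to hear that. We will ship a replacement at no charge."),
    ]

    def build_expert_prompt(query: str) -> list[dict]:
        """Build messages: system knowledge, few-shot Q/A pairs, live query."""
        messages = [{"role": "system", "content": DOMAIN_KNOWLEDGE}]
        for question, answer in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": question})
            messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": query})
        return messages

    if __name__ == "__main__":
        for m in build_expert_prompt("What is your refund window?"):
            print(m["role"], ":", m["content"])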

    Advancing AI with green practices and adaptable solutions for the future. [Article summary]

    Despite AI's achievements, how can its limitations be addressed to reduce computational costs, enhance transparency and pioneer eco-friendly practices?

    Unsupervised Temporospatial Neural Architecture for Sensorimotor Map Learning

    Peer reviewed postprint.

    Monitoring carbon emissions using deep learning and statistical process control: a strategy for impact assessment of governments' carbon reduction policies.

    Across the globe, governments are developing policies and strategies to reduce carbon emissions to address climate change. Monitoring the impact of governments' carbon reduction policies can significantly enhance our ability to combat climate change and meet emissions reduction targets. One promising area in this regard is the role of artificial intelligence (AI) in monitoring carbon reduction policy and strategy. While researchers have explored applications of AI to data from various sources, including sensors, satellites, and social media, to identify areas for carbon emissions reduction, AI applications in tracking the effect of governments' carbon reduction plans have been limited. This study presents an AI framework based on long short-term memory (LSTM) and statistical process control (SPC) for monitoring variations in carbon emissions, using UK annual CO2 emissions (per capita) data covering the period from 1750 to 2021. LSTM is used to develop a surrogate model of the UK's carbon emission characteristics and behaviours. In our experiments, LSTM showed better predictive ability than ARIMA, exponential smoothing and feedforward artificial neural networks (ANN) in predicting CO2 emissions on a yearly prediction horizon. Using the deviation of the recorded emission data from the surrogate process, the variations and trends in these behaviours are then analysed using SPC, specifically Shewhart individual/moving range control charts. The results show several assignable variations between the mid-1990s and 2021, which correlate with notable UK government commitments to lower carbon emissions within this period. The framework presented in this paper can help identify periods of significant deviation from a country's normal CO2 emissions, which may result from government carbon reduction policies or from other activities that alter the amount of CO2 emitted.
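
    The SPC stage of the framework can be illustrated with an individuals/moving-range (I-MR) chart applied to the residuals between recorded emissions and the surrogate's predictions. The sketch below assumes the residuals have already been computed (the LSTM surrogate is omitted, and the dummy residuals are placeholders); 2.66 and 3.267 are the standard I-MR chart constants.

    import numpy as np

    # Shewhart individuals/moving-range (I-MR) limits on residuals
    # between recorded CO2 emissions and the LSTM surrogate's output.
    # Dummy residuals stand in for the paper's actual series.
    residuals = np.random.default_rng(0).normal(0.0, 0.1, size=50)

    moving_range = np.abs(np.diff(residuals))     # |x_i - x_{i-1}|
    mr_bar = moving_range.mean()
    centre = residuals.mean()

    # Standard I-MR factors: 2.66 for individuals, 3.267 for ranges.
    ucl_i = centre + 2.66 * mr_bar
    lcl_i = centre - 2.66 * mr_bar
    ucl_mr = 3.267 * mr_bar

    # Points outside the individuals limits signal assignable variation.
    out_of_control = np.flatnonzero((residuals > ucl_i) | (residuals < lcl_i))
    print(f"I-chart limits: [{lcl_i:.3f}, {ucl_i:.3f}]; MR UCL: {ucl_mr:.3f}")
    print("Assignable-variation indices:", out_of_control)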

    The Pure-Emic User Interface Design Methodology for an Online Community Policing Hub

    The pervasiveness of the internet and internet-ready devices has greatly facilitated the rapid and unprecedented adoption of online social networks and their attendant online communities. In this context, an online community is a group of people tied together by a common interest, purpose, goal or practice. Hence, the centre of attraction of an online community is the interest, purpose or goal that the members share or stand to achieve, either collectively or individually. An Online Community Hub (OCH) is a collaborative virtual platform on the web where all the people, institutions, technologies, tools, resources and service delivery frameworks relevant to a community are made visible and accessible. An Online Community Policing Hub (OCPH) is an Online Community Hub whose shared community interest, purpose or goal is community policing. The thrust of this paper is to present a methodical approach to community-centred user interface design for an OCPH. The paper presents a Pure-Emic User Interface Design (PEUID) approach underpinned by knowledge from fields such as ergonomics, cognitive psychology, anthropology and software engineering, ensuring the well-being of system users and the goal of zero-training use without deviating from users' mental model of the system.
    Keywords: Geo-Community, Online community, Community policing, Community Hub, Ergonomics, Anthropology, Software Engineering

    A class-specific metaheuristic technique for explainable relevant feature selection.

    A significant amount of previous research into feature selection has been aimed at developing methods that can derive variables relevant to an entire dataset. Although these approaches have revealed substantial improvements in classification accuracy, they have failed to address the problem of explainability of outputs. This paper seeks to address the problem of identifying explainable features using a class-specific feature selection method based on genetic algorithms and the one-vs-all strategy. Our proposed method finds relevant features for each class in the dataset and uses these features to enable more accurate classification as well as interpretation of the outputs. The results of our experiments demonstrate that the proposed method provides descriptive insights into prediction outputs and outperforms popular global feature selection techniques in the classification of high-dimensional and noisy datasets. Since there are no known challenging benchmark datasets for evaluating class-specific feature selection algorithms, this paper also recommends an approach for combining disparate datasets for this purpose.
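
    As an illustration of the general approach rather than the paper's implementation, the sketch below evolves binary feature masks with a simple genetic algorithm for each class under a one-vs-all relabelling. The GA settings, the logistic-regression fitness function and the wine dataset are assumptions chosen for demonstration.

    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    X, y = load_wine(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask, y_binary):
        """Cross-validated accuracy of a classifier on the masked features."""
        if not mask.any():
            return 0.0
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(clf, X[:, mask], y_binary, cv=3).mean()

    def ga_select(y_binary, pop_size=20, generations=15, mutation_rate=0.1):
        """Evolve a population of binary feature masks; return the fittest."""
        pop = rng.random((pop_size, n_features)) < 0.5
        for _ in range(generations):
            scores = np.array([fitness(ind, y_binary) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_features)                # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(n_features) < mutation_rate  # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, children])
        scores = np.array([fitness(ind, y_binary) for ind in pop])
        return pop[scores.argmax()]

    # One-vs-all: a separate relevant-feature subset for each class.
    for cls in np.unique(y):
        mask = ga_select((y == cls).astype(int))
        print(f"class {cls}: features {np.flatnonzero(mask).tolist()}")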

    Comparative analysis of Mechanisms for Categorization and Moderation of User Generated Text Contents on a Social E-Governance Forum

    This paper presents a comparative analysis of two mechanisms for automated categorization and moderation of User Generated Text Contents (UGTCs) on a social e-governance forum. Posts on the forum are categorized as "relevant", "irrelevant but interesting" or "must be removed". Relevant posts are those capable of supporting government decisions; the irrelevant-but-interesting category consists of posts that are not relevant but can entertain or enlighten other users; must-be-removed posts are abusive or obscene. Two classifiers, a Support Vector Machine (SVM) with the one-vs-the-rest technique and Multinomial Naive Bayes, were trained, evaluated and compared using Scikit-learn. The results show that the SVM, with an accuracy of 96% on the test set, performs better than Naive Bayes, which achieved 88.6% accuracy on the same test set.
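
    A minimal Scikit-learn reconstruction of the two compared setups is sketched below; the toy posts and labels are placeholders for the forum's annotated UGTC dataset, and the preprocessing choices (TF-IDF, LinearSVC) are assumptions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy stand-ins for the forum's annotated posts.
    posts = [
        "The new road project should prioritise drainage",
        "Happy independence day to everyone here!",
        "You are all idiots and should be ashamed",
        "Please fix the broken streetlights on Main Street",
    ]
    labels = ["relevant", "irrelevant but interesting",
              "must be removed", "relevant"]

    svm = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
    nb = make_pipeline(TfidfVectorizer(), MultinomialNB())

    for name, model in [("SVM (one-vs-the-rest)", svm), ("Multinomial NB", nb)]:
        model.fit(posts, labels)
        print(name, "->", model.predict(["The council should repair the bridge"]))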

    Automated Well Log Pattern Alignment and History-Matching Techniques : An Empirical Review and Recommendations

    Acknowledgement: This work was supported by the Scottish Funding Council, Advanced Innovation Voucher, and ANSA Data Analytics. Peer reviewed postprint.

    Machine learning algorithms for stroke risk prediction leveraging on explainable artificial intelligence techniques (XAI).

    Stroke poses a significant global health challenge, contributing to widespread mortality and disability. Identifying predictors of stroke risk is crucial for enabling timely interventions, thereby reducing the growing impact of strokes. This research addresses this imperative by employing Explainable Artificial Intelligence (XAI) techniques to pinpoint stroke risk predictors. To bridge existing gaps, six machine learning models were assessed using key performance metrics. Using the Synthetic Minority Over-sampling Technique (SMOTE) to minimise the impact of the imbalanced dataset used in this research, the Random Forest algorithm emerged as the most effective, with an accuracy of 94.5%, AUC-ROC of 0.95, recall of 0.96, precision of 0.93, and an F1 score of 0.95. This study explored the interpretation of these algorithms and results using Local Interpretable Model-agnostic Explanations (LIME) and Explain Like I'm Five (ELI5). With these interpretations, healthcare providers can gain insight into patients' stroke risk predictions. Key stroke risk factors highlighted by the study include age, marital status, glucose level, body mass index, work type, heart disease, and gender. This research contributes to healthcare and health informatics by providing insights that can enhance strategies for stroke prevention and management, ultimately leading to improved patient care. The identified predictors offer valuable information for healthcare professionals to develop targeted interventions, fostering a proactive approach to mitigating the impact of strokes on individuals and the healthcare system.
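
    The core pipeline described, SMOTE rebalancing followed by Random Forest training and a LIME explanation for an individual patient, might look like the sketch below. The synthetic features and labels are placeholders for the study's stroke dataset, and the hyperparameters are assumptions.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic placeholder data: ~10% positive (stroke) class.
    rng = np.random.default_rng(0)
    feature_names = ["age", "glucose_level", "bmi", "heart_disease"]
    X = rng.random((500, 4))
    y = (rng.random(500) < 0.1).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

    # Rebalance the training set, then fit the Random Forest.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_res, y_res)

    # Explain one patient's prediction with LIME.
    explainer = LimeTabularExplainer(
        X_res, feature_names=feature_names,
        class_names=["no stroke", "stroke"], mode="classification")
    explanation = explainer.explain_instance(
        X_te[0], model.predict_proba, num_features=4)
    print(explanation.as_list())  # per-feature contributions for this patient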