
Conclusion-making

From EverybodyWiki Bios & Wiki


Conclusions often include information on analytic confidence, open questions or further research needs, confidence intervals, and recommendations, and consist of high-level abstract interpretation and sensemaking of a work's results or data. In science, conclusion-making, or drawing conclusions, is the last step of the scientific method, or the second-to-last if a 'reporting results' step is included.[1]

Conclusion-making can be distinguished from decision-making and from judgments in that conclusions are final,[2][3] whereas decisions, for example after a process of deliberation or at the end of a study, are only reasoned judgments,[3] and in that conclusions – at least in the scientific context – are established with "careful regard to evidence, but without regard to consequences of specific actions in specific circumstances".[4] A final decision to use a conclusion for a specified intended use may still be less a conclusion than a decision.[5]

How conclusions are made is a subject of research in the contexts of scientific literature, education, cognitive neuroscience, collective intelligence, and artificial intelligence.

Argumentative sentences are common in daily communication and play an important role in every decision- or conclusion-making process – one approach is to explore them computationally via argument annotation and analysis using deep learning.[6] More broadly, artificial intelligence can reproduce human conclusion-making[7][8][9] or be used to improve it.[10]

In science and education

Academic papers often include a by now more or less established standard section called "Conclusions", with common similar or synonymous section titles including "Conclusion" or "Discussion".[11] This section is often the last one of a study before the references and may be among the most commonly read sections of studies,[12] right after their shorter summarizing abstracts and titles.[13][14] Sometimes a conclusions section is part of a subdivided (or "structured")[15] abstract. Conclusions are to be distinguished from "Results" sections or study results, in that the former interpret, summarize,[16] find meaning and significance in,[17][18][12] and contextualize[19] the results data. Often, the decision whether to read more of an article, or the whole of it, is made only after reading the abstract and conclusions sections – a practice recommended by some.[20] Sometimes these sections also point out identified knowledge gaps.[21]

In the medical sciences, abstracts often contain a small conclusions section, as this word cloud of abstract sections from 302 studies in leading biomedical journals[22] shows.

Metascience studies can list and analyze a variety of factors that could lead to unjustified conclusions and that are not accounted for in the typical statistical measures used, such as statistical significance or standard deviations.[23] Assessing whether a study's conclusions match its data is common advice from experienced academics, for which looking at the figures early may also be useful.[24] Often, studies include only the results that led to their conclusions, without also describing dead ends and non-results.[25]

According to Tukey, the scientific body of knowledge grows by the reaching of conclusions, which are to be accepted and subsequently taken into the body of knowledge, "not just into the guidebook of advice for immediate action, as would be the case with a decision", as "something of lasting value extracted from the data".[4] He suggests that a conclusion is to "remain accepted, unless and until unusually strong evidence to the contrary arises", and is "accepted subject to future rejection, when and if the evidence against it becomes strong enough".[4]

In science education and science-based education

Educational textbooks "generally give the right answer or the conclusion rather than clarify the interpretive process, including pitfalls, wrong paths, and misunderstandings that occur along the way."[25] In 1962, Schwab criticized the way science was commonly taught.[25]

Good knowledge of conclusion-making methods – such as analogy and inductive and deductive reasoning, and how these are deployed, including proving claims, systematizing knowledge, and checking hypotheses – is essential for learning about natural and social phenomena.[26] Students' conclusion-making skills can be improved with digital technologies.[27][28]

A study investigated how nursing students deferred their conclusions and sought guidance from "others", usually registered nurses or sometimes doctors, when they wanted clarification or confirmation of their interpretations or their implications.[29]

In science-based societal practices

In clinical decision-making, mistakes of accepting a diagnosis conclusion before it has been fully verified have been called 'premature closure'.[30]

Modern humans, including the scientists of modern society, make widespread use of induction and deduction. Francis Bacon first formalized induction in his 1620 book Novum Organum and advised that facts be assimilated without bias before reaching a conclusion.[1]

Artificial intelligence

A biomedical AI making conclusions after being trained and provided with multimodal data[31]

Conclusion-making may be an important element of various artificial intelligence systems.[32] A researcher suggests that one could say an agent exhibits rational thinking "if it is able to provide reasons for what it does or what it believes", where "[e]xternal observers can verify whether a system thinks rationally if the system uses an understandable language to describe its own beliefs and justifications about how conclusions are reached".[33] This capability is especially useful in the context of what is called explainable artificial intelligence,[33][9][34] which relates to transparency and logical replicability. Neural networks "formulate the final conclusion-making process as the classification or generation task" and "lack in explaining how they perform induction and reasoning".[35] Interpretability can be valuable for verifying the logic behind a specific conclusion.[36] Even if a system uses "an understandable language to describe its own beliefs and justifications about how conclusions are reached",[33] it may be unclear whether that description indeed fully reflects the reasoning actually deployed. Some researchers have argued for open-source artificial intelligence in the context of transparency and verifiability.[37][38]

Machines can deduce conclusions "from beliefs" using efficient algorithms based on automated reasoning, such as automated theorem provers, and can also "automate other types [of] reasoning such as approximate reasoning (e.g., using probabilistic representations) or analogical reasoning (e.g., case-based reasoning)".[33]
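The deduction of conclusions from a set of beliefs can be illustrated with a minimal forward-chaining sketch in Python. This is a hypothetical toy example of rule-based inference, not the mechanism of any of the cited systems; the facts and rules are invented:

```python
# Minimal forward-chaining sketch: derive conclusions from beliefs using
# Horn-style rules of the form (premises, conclusion). Hypothetical example.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusion can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already derived.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # a new conclusion is reached
                changed = True
    return derived

beliefs = {"it_rains", "no_umbrella"}
rules = [
    ({"it_rains", "no_umbrella"}, "gets_wet"),
    ({"gets_wet"}, "should_change_clothes"),
]
print(forward_chain(beliefs, rules))
```

Real automated theorem provers use far more expressive logics and efficient indexing, but the fixed-point loop above captures the basic idea of mechanically deriving conclusions from beliefs.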

Semantic Web research indicates "semantic-based search", e.g. using SPARQL, can retrieve information from complex and heterogeneous database systems and "generate logical conclusions" on the requested issues.[39]
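The idea behind such semantic-based search can be sketched with a toy triple store and a wildcard pattern query, the data model that SPARQL builds on. This is a simplified Python illustration with invented example triples, not a SPARQL engine:

```python
# Toy triple store with pattern matching, illustrating how semantic-based
# search can retrieve facts and "generate logical conclusions" (here:
# transitive class membership). The triples are invented examples.

triples = {
    ("Aspirin", "isA", "NSAID"),
    ("NSAID", "subclassOf", "Analgesic"),
    ("Analgesic", "subclassOf", "Drug"),
}

def query(pattern):
    """Match a (subject, predicate, object) pattern; None acts as a
    wildcard, like a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if all(q is None or q == v for q, v in zip(pattern, t))]

def infer_types(entity):
    """Follow isA/subclassOf edges to conclude all classes of an entity."""
    classes, frontier = set(), {entity}
    while frontier:
        node = frontier.pop()
        for _, _, parent in query((node, None, None)):
            if parent not in classes:
                classes.add(parent)
                frontier.add(parent)
    return classes

print(infer_types("Aspirin"))  # concluded: NSAID, Analgesic, Drug
```

A real SPARQL endpoint would express the same inference declaratively, e.g. with a property path such as `?x rdfs:subClassOf* ?y`, over heterogeneous data sources.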

Conclusion-making is also used by computational reasoning systems such as diagnostic aides and similar medical systems, sometimes in distinct units.[40][41][42][43][44][45][33] Especially for such systems, being able to see and understand the system's reasoning, or getting "an [accurate] explanation" of how an answer was obtained, is "crucial for ensuring trust and transparency".[46] In recursive processes, such algorithms are in some designs given feedback on whether their conclusions are correct.[47]

In argument map systems

Example of a partial argument tree with claims and corresponding impact votes for arguments within the given line of reasoning – one form of collective determination of argument weights, as used on Kialo.[48]

A 2022 security studies paper found that "in general current systems cannot reliably automate analysis or synthesis of arguments in the same way that statistical packages can automate analysis of data".[49] There is research into how to efficiently calculate the winning arguments or argument weights and the overall conclusions.[50]

On the collaborative argument map website Kialo, users can vote on the overall debate topic as well as on individual claims to express their perspectives or conclusions; the rationale (i.e. the main causal arguments) for why a user voted on the veracity of the thesis as they did is not captured.[51] This constitutes the platform's algorithm for the collective determination of argument weights and theses' veracities,[48] which has a plurality component in that users can also switch between the perspectives of specific users and some groups of users (e.g. supporters and opposers of a thesis).[52] It features at least five key components of conclusion-making, or of the understanding thereof, in the context of historical-political education: perspectivity, relevance levels, interdependence, multicausality, and assessments.[53]

Research and development

Research investigates various conclusion-making algorithms such as "ranking-based semantics based on the propagation of the weights of arguments, that give a higher weight to non-attacked arguments".[54] In terms of reliability and quality, such approaches may require the sites not to exclude any valid arguments as long as they are relevant. Argumentation graphs can be, and have been, built collaboratively with the open-source software Argüman.[54]

Conclusion-making methods could be applied to debates or their data, e.g. via bipolar weighted argumentation frameworks, to find out what the current conclusion of a debate like "Computer Science is not actually a science" is.[55] Meaningful implementations may require more possible outcomes than binary yes-or-no tendencies, e.g. highlighting key arguments or limitations. This may include "summarizing the contentious and agreed-upon points of a discussion".[56] The adoption or implementation of conclusion-making methods could address a problem with Kialo and Argüman where "it's not clear how a user would go about trying to absorb the gist of a debate by navigating an argument tree", as trees can become "extremely dense, and the interface does not [necessarily] make it obvious which arguments the user should pay attention to".[57] Conclusion-making could also enable summaries that could be used in other argument maps to which a structured debate is related. Debates, or parts of them, could be collaboratively summarized and condensed.[58]
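The basic idea of propagating strengths through a bipolar (support/attack) argument tree can be sketched as follows. This is one naive illustrative aggregation scheme in Python, not the specific semantics of the cited frameworks, and the example arguments and scores are invented:

```python
# Naive recursive strength propagation over a bipolar argument tree:
# each argument has an intrinsic base score in [0, 1] plus children that
# support or attack it. Illustrative scheme only; the cited ranking-based
# and bipolar weighted semantics are defined differently.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    base: float = 0.5                # intrinsic plausibility in [0, 1]
    supports: List["Argument"] = field(default_factory=list)
    attacks: List["Argument"] = field(default_factory=list)

def strength(arg: Argument) -> float:
    """Aggregate children: each supporter moves the score toward 1,
    each attacker moves it toward 0, so the result stays in [0, 1]."""
    s = arg.base
    for sup in arg.supports:
        s = s + strength(sup) * (1 - s)
    for att in arg.attacks:
        s = s - strength(att) * s
    return s

thesis = Argument(
    "Computer Science is not actually a science", base=0.5,
    supports=[Argument("It studies artificial, not natural, phenomena", base=0.6)],
    attacks=[Argument("It uses hypothesis testing and experiments", base=0.7),
             Argument("It builds falsifiable models", base=0.4)],
)
print(round(strength(thesis), 3))  # → 0.144: attackers currently outweigh support
```

Note that a non-attacked, non-supported argument simply keeps its base score, echoing the intuition that non-attacked arguments should rank highly; richer semantics additionally handle cycles, vote counts, and argument reuse across the graph.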

Researchers have developed a way to explicitly model an argument's conclusion and have shown that this conclusion generation better enables the generation of stronger counter-arguments, especially when the attacked claim leaves its conclusion implicit. The stance of such counter-arguments needs to be opposite to that conclusion, which they suggest is key to "effective counter-argument generation", although counter-arguments can also attack an argument's premises or their connection to the conclusion.[59][60]

In general, there can be estimated or known differences in the degrees of certainty "that confirms, or not, initial assumptions".[39]

In neuroscience

Computational neuroscientists have shown that people with higher intelligence scores in Human Connectome Project cognitive tests took more time to solve difficult problems, and that their higher synchrony between brain areas allowed for better integration of evidence (or progress) from preceding working-memory sub-problem processing. Reducing synchrony in "avatar" simulations, which were adjusted and tuned towards personalization, "led decision-making circuits to quickly jump to conclusions". The codified results may be useful for an understanding of cognition that can be replicated or imitated in bio-inspired computing.[61][62] An earlier study indicates that the best participants in an experiment were the ones "who jumped to an early speculation but then deliberately tested it", since the initial hypothesis gave them a basis for seeking data that would be diagnostic.[63] Conclusion-making can be considered part of a step in creative thinking in problem-solving,[64] can also be considered a distinct higher-order thinking activity,[65] and can be investigated as a distinct capability in students from a cognitive perspective.[66]

See also

References

  1. 1.0 1.1 McComas, William (24 August 2020). Nature of Science in Science Instruction: Rationales and Strategies. Springer Nature. pp. 48–49. ISBN 978-3-030-57239-6.
  2. "Conclusion Definition & Meaning | Britannica Dictionary". www.britannica.com. Retrieved 21 July 2023.
  3. 3.0 3.1 "Definition of CONCLUSION". www.merriam-webster.com. 18 July 2023. Retrieved 21 July 2023.
  4. 4.0 4.1 4.2 Tukey, John W. (1960). "Conclusions vs Decisions". Technometrics. 2 (4): 423–433. doi:10.2307/1266451. ISSN 0040-1706. JSTOR 1266451.
  5. De Bièvre, Paul (1 July 2004). ""Decisions" vs "conclusions"". Accreditation and Quality Assurance. 9 (8): 439–440. doi:10.1007/s00769-004-0785-2. ISSN 1432-0517. Thus, arriving at a result (quantity value with an associated measurement uncertainty) from a given measurement, is one thing. It is a conclusion. Stating a result with a suitable uncertainty (based on an appropriate coverage factor) for a specified intended use, is another thing. It is a decision.
  6. Suhartono, Derwin; Gema, Aryo Pradipta; Winton, Suhendro; David, Theodorus; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni (19 October 2020). "Argument annotation and analysis using deep learning with attention mechanism in Bahasa Indonesia". Journal of Big Data. 7 (1): 90. doi:10.1186/s40537-020-00364-z. ISSN 2196-1115.
  7. "Application in Artificial Intelligence".
  8. Tomas, Boris (2014). "Cortex simulation system proposal using distributed computer network environments". arXiv:1403.5701 [cs.AI].
  9. 9.0 9.1 Geilings, Brian (2021). "Using Artificial Intelligence to positively impact the beer brewing process". www.theseus.fi. Explainable Artificial Intelligence (XAI) strives to generate a clear output and reasoning to what has led to a certain conclusion. [...] The usage of neural networks is inspired by the human brain and tries to replicate the conclusion-making process of humans
  10. Sharma, Devansh; Patel, Prachi; Shah, Manan (2 May 2023). "A comprehensive study on Industry 4.0 in the pharmaceutical industry for sustainable development". Environmental Science and Pollution Research. 30 (39): 90088–90098. doi:10.1007/s11356-023-26856-y. PMC 10153053. PMID 37129827.
  11. "The difference between abstract and conclusion". 28 September 2020. Retrieved 12 September 2023.
  12. 12.0 12.1 "Research Paper Conclusion: Know How To Write It? | Elsevier". Elsevier Author Services - Articles. 23 March 2022. Retrieved 11 September 2023.
  13. Pain, Elisabeth (2016). "How to (seriously) read a scientific paper". Science. Retrieved 12 September 2023.
  14. Keshav, S. "How to Read a Paper" (PDF). Retrieved 12 September 2023.
  15. "Structured Abstracts – What are structured abstracts?". Retrieved 12 September 2023.
  16. Andrade, Chittaranjan (2011). "How to write a good abstract for a scientific paper or conference presentation". Indian Journal of Psychiatry. 53 (2): 172–175. doi:10.4103/0019-5545.82558. ISSN 0019-5545. PMC 3136027. PMID 21772657.
  17. "Conclusions". The Writing Center • University of North Carolina at Chapel Hill. Such a conclusion will help them see why all your analysis and information should matter to them after they put the paper down. Your conclusion is your chance to have the last word on the subject. The conclusion allows you to have the final say on the issues you have raised in your paper, to synthesize your thoughts, to demonstrate the importance of your ideas, and to propel your reader to a new view of the subject.
  18. "Introductions & Conclusions". Retrieved 12 September 2023.
  19. "How to Write Discussions and Conclusions". PLOS. 16 October 2020.
  20. Subramanyam, Rv (2013). "Art of reading a journal article: Methodically and effectively". Journal of Oral and Maxillofacial Pathology. 17 (1): 65–70. doi:10.4103/0973-029X.110733. PMC 3687192. PMID 23798833.
  21. "FAQ: What is a research gap and how do I find one?". Retrieved 12 September 2023.
  22. Clotworthy, Amy; Davies, Megan; Cadman, Timothy J.; Bengtsson, Jessica; Andersen, Thea O.; Kadawathagedara, Manik; Vinther, Johan L.; Nguyen, Tri-Long; Varga, Tibor V. (10 May 2023). "Saving time and money in biomedical publishing: the case for free-format submissions with minimal requirements". BMC Medicine. 21 (1): 172. doi:10.1186/s12916-023-02882-y. ISSN 1741-7015. PMC 10170849. PMID 37161428.
  23. "Explained: Sigma". MIT News | Massachusetts Institute of Technology. 9 February 2012. Retrieved 20 July 2023.
  24. Hubbard, Katharine E.; Dunbar, Sonja D. (2017). "Perceptions of scientific research literature and strategies for reading papers depend on academic career stage". PLOS ONE. 12 (12): e0189753. Bibcode:2017PLoSO..1289753H. doi:10.1371/journal.pone.0189753. PMC 5746228. PMID 29284031.
  25. 25.0 25.1 25.2 Linn, Marcia C.; Davis, Elizabeth A.; Bell, Philip (2004). Internet Environments for Science Education. Routledge. ISBN 978-1-135-63183-3.
  26. Glavche, Metodi; Malčeski, Risto; Malčeska, Cvetanka (2017). "Learning the conclusion making methods in the instruction in the natural-mathematical group of subjects" (PDF).
  27. Purba, Siska Wati Dewi; Hwang, Wu-Yuin (July 2017). "Investigation of Learning Behaviors and Their Effects to Learning Achievement Using Ubiquitous-Physics App". 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT). pp. 446–450. doi:10.1109/ICALT.2017.10. ISBN 978-1-5386-3870-5.
  28. Swartz, Clinton Keith (2012). "Digital data collection and analysis : what are the effects on students' understanding of chemistry concepts". The findings showed that significant correlations existed among hypothesis-making, interpreting graphs, applying formulas, conclusion-making, conceptual understanding, and post-test. After an in-depth investigation, we found that interpreting graphs and conceptual understanding were the two most important factors to affect learning achievement. Additionally, students perceived that U-Physics was beneficial to their physics learning.
  29. Tower, Marion; Watson, Bernadette; Bourke, Alison; Tyers, Emma; Tin, Anne (November 2019). "Situation awareness and the decision‐making processes of final‐year nursing students". Journal of Clinical Nursing. 28 (21–22): 3923–3934. doi:10.1111/jocn.14988. hdl:10072/417308. ISSN 0962-1067. PMID 31260577.
  30. Raz, Manda; Pouryahya, Pourya (29 May 2021). Decision Making in Emergency Medicine: Biases, Errors and Solutions. Springer Nature. p. 293. ISBN 978-981-16-0143-9.
  31. Tu, Tao; et al. (2023). "Towards Generalist Biomedical AI". arXiv:2307.14334 [cs.CL].
  32. Rodriguez, Emilio S. Corchado; Snasel, Vaclav; Abraham, Ajith; Wozniak, Michal; Grana, Manuel; Cho, Sung-Bae (15 March 2012). Hybrid Artificial Intelligent Systems: 7th International Conference, HAIS 2012, Salamanca, Spain, March 28-30th, 2012, Proceedings, Part II. Springer. ISBN 978-3-642-28931-6. What is most important intelligent systems is making conclusions. The "intelligence" of the system is revealed through its ability to make decisions (via the conclusion making process), and also thorough[sic] its ability to learn and acquire knowledge. The intelligent systems, apart from classical information system include neural networks, fuzzy systems, decision trees and genetic algorithms (evolution algorithms). An intelligent system features the ability to acquire new knowledge, self-adapt, accept faulty or deficient data, and is creative at the same time.
  33. 33.0 33.1 33.2 33.3 33.4 Molina, Martin (2020). "What is an intelligent system?". arXiv:2009.09083 [cs.CY].
  34. Paul, Debleena; Sanap, Gaurav; Shenoy, Snehal; Kalyane, Dnyaneshwar; Kalia, Kiran; Tekade, Rakesh K. (January 2021). "Artificial intelligence in drug discovery and development". Drug Discovery Today. 26 (1): 80–93. doi:10.1016/j.drudis.2020.10.010. PMC 7577280. PMID 33099022.
  35. Zhang, Wenbo; Tang, Likai; Mo, Site; Liu, Xianggen; Song, Sen (6 December 2022). "Learning Robust Rule Representations for Abstract Reasoning via Internal Inferences". Advances in Neural Information Processing Systems. 35: 33550–33562. Retrieved 12 September 2023.
  36. Gastounioti, Aimilia; Kontos, Despina (1 May 2020). "Is It Time to Get Rid of Black Boxes and Cultivate Trust in AI?". Radiology: Artificial Intelligence. 2 (3): e200088. doi:10.1148/ryai.2020200088. ISSN 2638-6100. PMC 7259191. PMID 32510520.
  37. Liesenfeld, Andreas; Lopez, Alianda; Dingemanse, Mark (19 July 2023). "Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators". Proceedings of the 5th International Conference on Conversational User Interfaces. ACM. pp. 1–6. arXiv:2307.05532. doi:10.1145/3571884.3604316. ISBN 9798400700149.
  38. Spirling, Arthur (18 April 2023). "Why open-source generative AI models are an ethical way forward for science". Nature. 616 (7957): 413. Bibcode:2023Natur.616..413S. doi:10.1038/d41586-023-01295-4. PMID 37072520.
  39. 39.0 39.1 Intelligent computing systems: emerging application areas. Berlin Heidelberg: Springer. 2016. ISBN 978-3-662-49177-5.
  40. Piecha, Jan (2001). "The neural network selection for a medical diagnostic system using an artificial data set". Journal of Computing and Information Technology. 9 (2): 123. doi:10.2498/cit.2001.02.03.
  41. Piecha, Jan (30 June 2001). "The Neural Network Selection for a Medical Diagnostic System using an Artificial Data Set". Journal of Computing and Information Technology. 9 (2): 123–132. doi:10.2498/cit.2001.02.03. ISSN 1330-1136. The paper describes experiments with a neural network selection that works as a conclusion-making unit [...] The discussed methods of the neural network selection and training show how to avoid difficulties with limited number of available data records, needed for the conclusion algorithms effectiveness improvement.
  42. Zyguła, J.; Piecha, J.; Zięba, T. (2006). "Automatic conclusions making on neurological diseases by means of distributed data acquisition". Journal of Medical Informatics & Technologies. 10. ISSN 1642-6037.
  43. Chandzlik, S. (2006). "The method of neuron weight vector initial values selection in Kohonen network". Journal of Medical Informatics & Technologies. 10. ISSN 1642-6037. Diagnosing of morbid conditions by means of automatic tools supported by computers is a significant and often used element in modern medicine. Some examples of these tools are automatic conclusion-making units of Parotec System for Windows (PSW).
  44. Chandzlik, S.; Piecha, J. (2002). "A patient walk-data-record modelling using a spline interpolation method". Journal of Medical Informatics & Technologies. 3. ISSN 1642-6037. The record length is limited to an efficient size for training the Conclusion-Making Unit (CMU).
  45. "The neural network conclusion-making system for foot abnormality recognition".
  46. Gohel, Prashant; Singh, Priyanka; Mohanty, Manoranjan (12 July 2021). "Explainable AI: current status and future directions". arXiv:2107.07045 [cs.LG].
  47. von Ulmenstein, Ulrich; Tretter, Max; Ehrlich, David B.; Lauppert von Peharnik, Christina (1 August 2022). "Limiting medical certainties? Funding challenges for German and comparable public healthcare systems due to AI prediction and how to address them". Frontiers in Artificial Intelligence. 5. doi:10.3389/frai.2022.913093. PMC 9376350. PMID 35978652.
  48. 48.0 48.1 Durmus, Esin; Ladhak, Faisal; Cardie, Claire (2019). "The Role of Pragmatic and Discourse Context in Determining Argument Impact". Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 5667–5677. arXiv:2004.03034. doi:10.18653/v1/D19-1568.
  49. "Complexity Demands Adaptation: Two Proposals for Facilitating Better Debate in International Relations and Conflict Research". Georgetown Security Studies Review. 30 November 2022.
  50. Young, Anthony P. "Likes as Argument Strength for Online Debate" (PDF). Retrieved 10 June 2023.
  51. Carroll, John M.; Sun, Na; Beck, Jordan (2019). "Creating Dialectics to Learn: Infrastructures, Practices, and Challenges". Learning in a Digital World. Smart Computing and Intelligence. Springer. pp. 37–58. doi:10.1007/978-981-13-8265-9_3. ISBN 978-981-13-8265-9.
  52. Beck, Jordan; Neupane, Bikalpa; Carroll, John M. "Managing Conflict in Online Debate Communities: Foregrounding Moderators' Beliefs and Values on Kialo". doi:10.31219/osf.io/cdfq7.
  53. "Urteilsbildung digital mit Kialo - Wie die historische Urteilsbildung im Distanz- und Präsenzunterricht mit Kialo digital unterstützt werden kann (PRIMA-Modell)". www.friedrich-verlag.de. Retrieved 20 July 2023.
  54. 54.0 54.1 "Argumentation Ranking Semantics based on Propagation". Retrieved 13 June 2023.
  55. Delobelle, Jérôme (12 December 2017). Ranking-based Semantics for Abstract Argumentation (phdthesis). Université d'Artois.
  56. Schneider, Jodi; Groza, Tudor; Passant, Alexandre. "A Review of Argumentation for the Social Semantic Web" (PDF).
  57. Yuan, An (2018). Collective debate (Thesis). Massachusetts Institute of Technology. hdl:1721.1/122893. Retrieved 13 June 2023.
  58. Tian, Sunny(Sunny Y. ). (2020). Wikum+: integrating discussion and summarization in collaborative writing (Thesis). Massachusetts Institute of Technology. hdl:1721.1/127530. Retrieved 12 September 2023.
  59. Alshomary, Milad; Wachsmuth, Henning (2023). "Conclusion-based Counter-Argument Generation". arXiv:2301.09911 [cs.CL].
  60. Thorburn, Luke; Kruger, Ariel (2022). "Optimizing Language Models for Argumentative Reasoning" (PDF).
  61. "Schlau heißt nicht schnell: Intelligente Gehirne "ticken" oft langsamer | MDR.DE". MDR (in German). Retrieved 24 June 2023.
  62. Schirner, Michael; Deco, Gustavo; Ritter, Petra (23 May 2023). "Learning how network structure shapes decision-making for bio-inspired computing". Nature Communications. 14 (1): 2963. Bibcode:2023NatCo..14.2963S. doi:10.1038/s41467-023-38626-y. ISSN 2041-1723. PMC 10206104. PMID 37221168.
  63. Klein, Gary; Moon, Brian; Hoffman, Robert R. (1 July 2006). "Making Sense of Sensemaking 1: Alternative Perspectives". IEEE Intelligent Systems. 21 (4): 70–73. doi:10.1109/MIS.2006.75. ISSN 1541-1672.
  64. Novitasari, Dwi; Triutami, Tabita Wahyu; Wulandari, Nourma Pramestie; Rahman, Abdul; Alimuddin, Alimuddin (28 August 2020). "Students' Creative Thinking in Solving Mathematical Problems Using Various Representations". Proceedings of the 1st Annual Conference on Education and Social Sciences (ACCESS 2019). Atlantis Press. pp. 99–102. doi:10.2991/assehr.k.200827.026. ISBN 978-94-6239-047-8. The steps of creative thinking can be explained in four steps, as follows: (1) Preparation, including information gathering (symbolic representation) and problem translating, (2) Incubation, including ideas and conjectures constructing, (visual and symbolic representations), connecting and recalling the appropriate concepts to solve the problem (3) Illumination, including ideas designing and applying (visual representation) and (4) Verification, including solution testing (symbolic representation) and conclusion making.
  65. "Higher order thinking task and question application in the world cognition lessons in primary forms" (PDF).
  66. Kong, Siu Cheung (May 2013). "Developing information literacy through domain knowledge learning in digital classrooms". The study found that the students had a statistically significant growth in IL competency in the cognitive perspective, especially in tasks on searching target information and applying the information for supporting critical thinking, relationship interpretation and conclusion making.


This article "Conclusion-making" is from Wikipedia. The list of its authors can be seen in its edit history and/or on the page Edithistory:Conclusion-making. Articles copied from the Draft namespace on Wikipedia can be found on Wikipedia's Draft namespace rather than the main one.